Algorithms: DataStructure2_LectureNotes_w7


Complete electronic courseware: Data Structures and Algorithms (2nd Edition)

1.1.1 The Concept of a Data Structure
2. Definition of a data structure
A data structure is a pair B = (D, R), where D is a finite set of nodes and R is a finite set of relations among the nodes in D.
The logical structure of the data (logical form) is distinguished from its storage form, the physical structure (physical form).
Physical structure, i.e. the storage structure: the storage form (storage representation) of a data structure.
Kinds of data: numeric data (integers, reals, etc.); textual data (characters, strings, program code); matrices and records; sound and images.
Data always appears in some encoded form.
1.1 Basic Concepts
Data element, also called a data node or simply node: a group of related information describing the name, quantity, features, and properties of an independent entity. A node usually contains several data items, so its type is a record (structured) type, and one of the items may serve as the key; a node of a scalar type contains only a single data item.
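As a small illustration (the field names below are invented for this example, not taken from the notes), such a node with several data items and a key might be declared in C++ as:

    struct StudentNode {       // one data element (node) describing one entity
        int    id;             // key: the data item that identifies the node
        char   name[20];       // further data items
        double score;
    };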
Big-O notation: T(n) = O(f(n)). To obtain f(n), keep only the highest-order term of T(n), ignoring its lower-order terms and constant coefficients. This simplifies the calculation of T(n) and fairly objectively reflects the time performance of the algorithm when n is large.
1.2.2 Evaluation Criteria and Methods for Algorithms
3. Time complexity. Example: let T(n) = n² + 4n and f(n) = n²; then n² + 4n = O(n²).
Proof: there exist c = 2 and n₀ = 4 such that for all n > n₀ we always have n² + 4n ≤ 2n² (since 4n ≤ n² whenever n ≥ 4).
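In LaTeX form, the standard definition being applied here, together with the verification step, reads:

    T(n) = O(f(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 :\ T(n) \le c\, f(n) \ \text{for all } n \ge n_0,
    \qquad \text{here } n^2 + 4n \le n^2 + n \cdot n = 2n^2 \ \text{for } n \ge 4.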
2. Correctness. Evaluation methods: debugging with carefully chosen "representative" test data (this can only show that an algorithm has errors, never that it has none), and manual proof, for example by induction.

Data Structures and Algorithm Analysis in C++ (2nd Edition), by Clifford A. Shaffer: Solutions to Exercises, Part 2


5 Binary Trees

5.1 Consider a non-full binary tree. By definition, this tree must have some internal node X with only one non-empty child. If we modify the tree to remove X, replacing it with its child, the modified tree will have a higher fraction of non-empty nodes, since one non-empty node and one empty node have been removed.

5.2 Use as the base case the tree of one leaf node. The number of degree-2 nodes is 0, and the number of leaves is 1. Thus, the theorem holds.
For the induction hypothesis, assume the theorem is true for any tree with n − 1 nodes.
For the induction step, consider a tree T with n nodes. Remove from the tree any leaf node, and call the resulting tree T'. By the induction hypothesis, T' has one more leaf node than it has nodes of degree 2.
Now, restore the leaf node that was removed to form T. There are two possible cases.
(1) If this leaf node is the only child of its parent in T, then the number of nodes of degree 2 has not changed, nor has the number of leaf nodes. Thus, the theorem holds.
(2) If this leaf node is the child of a node in T with degree 2, then that node has degree 1 in T'. Thus, by restoring the leaf node we are adding one new leaf node and one new node of degree 2. Thus, the theorem holds.
By mathematical induction, the theorem is correct.

5.3 Base Case: For the tree of one leaf node, I = 0, E = 0, n = 0, so the theorem holds.
Induction Hypothesis: The theorem holds for the full binary tree containing n internal nodes.
Induction Step: Take an arbitrary full tree (call it T) of n internal nodes. Select some internal node x from T that has two leaves, and remove those two leaves. Call the resulting tree T'. Tree T' is full and has n − 1 internal nodes, so by the induction hypothesis E = I + 2(n − 1).
Let d be the depth of node x. Restore the two children of x, each at level d + 1. This adds d to I, since x is once again an internal node, and adds 2(d + 1) − d = d + 2 to E, since we gain the two leaf nodes but lose the contribution of x to E. Writing E' and I' for the values after the addition, E' = E + d + 2 and I' = I + d, so E' = I + 2(n − 1) + d + 2 = I' + 2n, which is what the theorem requires. Thus, by the principle of mathematical induction, the theorem is correct.

5.4 (a)
    template <class Elem>
    void inorder(BinNode<Elem>* subroot) {
      if (subroot == NULL) return;   // Empty, do nothing
      inorder(subroot->left());
      visit(subroot);                // Perform desired action
      inorder(subroot->right());
    }
(b)
    template <class Elem>
    void postorder(BinNode<Elem>* subroot) {
      if (subroot == NULL) return;   // Empty, do nothing
      postorder(subroot->left());
      postorder(subroot->right());
      visit(subroot);                // Perform desired action
    }

5.5 The key is to search both subtrees, as necessary.
    template <class Key, class Elem, class KEComp>
    bool search(BinNode<Elem>* subroot, Key K) {
      if (subroot == NULL) return false;
      if (subroot->value() == K) return true;
      if (search(subroot->right(), K)) return true;
      return search(subroot->left(), K);
    }
5.6 The key is to use a queue to store subtrees to be processed.
    template <class Elem>
    void level(BinNode<Elem>* subroot) {
      AQueue<BinNode<Elem>*> Q;
      Q.enqueue(subroot);
      while (!Q.isEmpty()) {
        BinNode<Elem>* temp;
        Q.dequeue(temp);
        if (temp != NULL) {
          Print(temp);
          Q.enqueue(temp->left());
          Q.enqueue(temp->right());
        }
      }
    }

5.7
    template <class Elem>
    int height(BinNode<Elem>* subroot) {
      if (subroot == NULL) return 0;   // Empty subtree
      return 1 + max(height(subroot->left()), height(subroot->right()));
    }

5.8
    template <class Elem>
    int count(BinNode<Elem>* subroot) {
      if (subroot == NULL) return 0;     // Empty subtree
      if (subroot->isLeaf()) return 1;   // A leaf
      return 1 + count(subroot->left()) + count(subroot->right());
    }

5.9 (a) Since every node stores 4 bytes of data and 12 bytes of pointers, the overhead fraction is 12/16 = 75%.
(b) Since every node stores 16 bytes of data and 8 bytes of pointers, the overhead fraction is 8/24 ≈ 33%.
(c) Leaf nodes store 8 bytes of data and 4 bytes of pointers; internal nodes store 8 bytes of data and 12 bytes of pointers. Since the nodes have different sizes, the total space needed for internal nodes is not the same as for leaf nodes. Students must be careful to do the calculation correctly, taking the weighting into account. The correct formula looks as follows, given that there are x internal nodes and x leaf nodes:
(4x + 12x) / (12x + 20x) = 16/32 = 50%.
(d) Leaf nodes store 4 bytes of data; internal nodes store 4 bytes of pointers. The formula looks as follows, given that there are x internal nodes and x leaf nodes:
4x / (4x + 4x) = 4/8 = 50%.

5.10 If equal valued nodes were allowed to appear in either subtree, then during a search for all nodes of a given value, whenever we encounter a node of that value the search would be required to search in both directions.

5.11 This tree is identical to the tree of Figure 5.20(a), except that a node with value 5 will be added as the right child of the node with value 2.

5.12 This tree is identical to the tree of Figure 5.20(b), except that the value 24 replaces the value 7, and the leaf node that originally contained 24 is removed from the tree.

5.13
    template <class Key, class Elem, class KEComp>
    int smallcount(BinNode<Elem>* root, Key K) {
      if (root == NULL) return 0;
      if (KEComp::gt(root->value(), K))
        return smallcount(root->leftchild(), K);
      else
        return smallcount(root->leftchild(), K) +
               smallcount(root->rightchild(), K) + 1;
    }

5.14
    template <class Key, class Elem, class KEComp>
    void printRange(BinNode<Elem>* root, int low, int high) {
      if (root == NULL) return;
      if (KEComp::lt(high, root->value()))        // all to left
        printRange(root->left(), low, high);
      else if (KEComp::gt(low, root->value()))    // all to right
        printRange(root->right(), low, high);
      else {                                      // Must process both children
        printRange(root->left(), low, high);
        PRINT(root->value());
        printRange(root->right(), low, high);
      }
    }

5.15 The minimum number of elements is contained in the heap with a single node at depth h − 1, for a total of 2^(h−1) nodes.
The maximum number of elements is contained in the heap that has completely filled up level h − 1, for a total of 2^h − 1 nodes.

5.16 The largest element could be at any leaf node.

5.17 The corresponding array will be in the following order (equivalent to level order for the heap):
12 9 10 5 4 1 8 7 3 2
5.18 (a) The array will take on the following order:
6 5 3 4 2 1
The value 7 will be at the end of the array.
(b) The array will take on the following order:
7 4 6 3 2 1
The value 5 will be at the end of the array.

5.19
    // Min-heap class
    template <class Elem, class Comp> class minheap {
    private:
      Elem* Heap;            // Pointer to the heap array
      int size;              // Maximum size of the heap
      int n;                 // # of elements now in the heap
      void siftdown(int);    // Put element in correct place
    public:
      minheap(Elem* h, int num, int max)      // Constructor
        { Heap = h; n = num; size = max; buildHeap(); }
      int heapsize() const                    // Return current size
        { return n; }
      bool isLeaf(int pos) const              // TRUE if pos a leaf
        { return (pos >= n/2) && (pos < n); }
      int leftchild(int pos) const
        { return 2*pos + 1; }                 // Return leftchild pos
      int rightchild(int pos) const
        { return 2*pos + 2; }                 // Return rightchild pos
      int parent(int pos) const               // Return parent position
        { return (pos-1)/2; }
      bool insert(const Elem&);               // Insert value into heap
      bool removemin(Elem&);                  // Remove minimum value
      bool remove(int, Elem&);                // Remove from given pos
      void buildHeap()                        // Heapify contents
        { for (int i=n/2-1; i>=0; i--) siftdown(i); }
    };

    template <class Elem, class Comp>
    void minheap<Elem, Comp>::siftdown(int pos) {
      while (!isLeaf(pos)) {                  // Stop if pos is a leaf
        int j = leftchild(pos);
        int rc = rightchild(pos);
        if ((rc < n) && Comp::gt(Heap[j], Heap[rc]))
          j = rc;                             // Set j to lesser child's value
        if (!Comp::gt(Heap[pos], Heap[j])) return;   // Done
        swap(Heap, pos, j);
        pos = j;                              // Move down
      }
    }

    template <class Elem, class Comp>
    bool minheap<Elem, Comp>::insert(const Elem& val) {
      if (n >= size) return false;            // Heap is full
      int curr = n++;
      Heap[curr] = val;                       // Start at end of heap
      // Now sift up until curr's parent < curr
      while ((curr != 0) &&
             (Comp::lt(Heap[curr], Heap[parent(curr)]))) {
        swap(Heap, curr, parent(curr));
        curr = parent(curr);
      }
      return true;
    }

    template <class Elem, class Comp>
    bool minheap<Elem, Comp>::removemin(Elem& it) {
      if (n == 0) return false;               // Heap is empty
      swap(Heap, 0, --n);                     // Swap min with last value
      if (n != 0) siftdown(0);                // Siftdown new root val
      it = Heap[n];                           // Return deleted value
      return true;
    }

    // Remove value at specified position
    template <class Elem, class Comp>
    bool minheap<Elem, Comp>::remove(int pos, Elem& it) {
      if ((pos < 0) || (pos >= n)) return false;   // Bad pos
      swap(Heap, pos, --n);                   // Swap with last value
      while ((pos != 0) &&
             (Comp::lt(Heap[pos], Heap[parent(pos)]))) {
        swap(Heap, pos, parent(pos));         // Push up if key is small
        pos = parent(pos);
      }
      siftdown(pos);                          // Push down if key is large
      it = Heap[n];
      return true;
    }

5.20 Note that this summation is similar to Equation 2.5. To solve the summation requires the shifting technique from Chapter 14, so this problem may be too advanced for many students at this time. Note that 2f(n) − f(n) = f(n), but also that:
2f(n) − f(n) = n(2/4 + 4/8 + 6/16 + ··· + 2(log n − 1)/n) − n(1/4 + 2/8 + 3/16 + ··· + (log n − 1)/n)
             = n( sum_{i=1}^{log n − 1} 1/2^i − (log n − 1)/n )
             = n(1 − 1/n − (log n − 1)/n)
             = n − log n.

5.21 Here are the final codes, rather than a picture.
l  00
h  010
i  011
e  1000
f  1001
j  101
d  11000
a  1100100
b  1100101
c  110011
g  1101
k  111
The average code length is 3.23445.

5.22 The set of sixteen characters with equal weight will create a Huffman coding tree that is complete, with 16 leaf nodes all at depth 4. Thus, the average code length will be 4 bits. This is identical to the fixed-length code.
Thus, in this situation, the Huffman coding tree saves no space (and costs no space).

5.23 (a) By the prefix property, there can be no character with codes 0, 00, or 001x where "x" stands for any binary string.
(b) There must be at least one code with each form 1x, 01x, 000x where "x" could be any binary string (including the empty string).

5.24 (a) Q and Z are at level 5, so any string of length n containing only Q's and Z's requires 5n bits.
(b) O and E are at level 2, so any string of length n containing only O's and E's requires 2n bits.
(c) The weighted average is
(5 × 5 + 10 × 4 + 35 × 3 + 50 × 2) / 100
bits per character.

5.25 This is a straightforward modification.
    // Build a Huffman tree from minheap hl
    template <class Elem>
    HuffTree<Elem>* buildHuff(minheap<HuffTree<Elem>*, HHCompare<Elem> >* hl) {
      HuffTree<Elem> *temp1, *temp2, *temp3;
      while (hl->heapsize() > 1) {     // While at least 2 items
        hl->removemin(temp1);          // Pull first two trees
        hl->removemin(temp2);          // off the heap
        temp3 = new HuffTree<Elem>(temp1, temp2);
        hl->insert(temp3);             // Put the new tree back on list
        delete temp1;                  // Must delete the remnants
        delete temp2;                  // of the trees we created
      }
      return temp3;
    }

6 General Trees

6.1 The following algorithm is linear on the size of the two trees.
    // Return TRUE iff t1 and t2 are roots of identical general trees
    template <class Elem>
    bool Compare(GTNode<Elem>* t1, GTNode<Elem>* t2) {
      GTNode<Elem> *c1, *c2;
      if (((t1 == NULL) && (t2 != NULL)) ||
          ((t2 == NULL) && (t1 != NULL)))
        return false;
      if ((t1 == NULL) && (t2 == NULL)) return true;
      if (t1->val() != t2->val()) return false;
      c1 = t1->leftmost_child();
      c2 = t2->leftmost_child();
      while (!((c1 == NULL) && (c2 == NULL))) {
        if (!Compare(c1, c2)) return false;
        if (c1 != NULL) c1 = c1->right_sibling();
        if (c2 != NULL) c2 = c2->right_sibling();
      }
      return true;
    }

6.2 The following algorithm is Θ(n²).
    // Return true iff t1 and t2 are roots of identical binary trees
    template <class Elem>
    bool Compare2(BinNode<Elem>* t1, BinNode<Elem>* t2) {
      if (((t1 == NULL) && (t2 != NULL)) ||
          ((t2 == NULL) && (t1 != NULL)))
        return false;
      if ((t1 == NULL) && (t2 == NULL)) return true;
      if (t1->val() != t2->val()) return false;
      if (Compare2(t1->leftchild(), t2->leftchild()) &&
          Compare2(t1->rightchild(), t2->rightchild()))
        return true;
      if (Compare2(t1->leftchild(), t2->rightchild()) &&
          Compare2(t1->rightchild(), t2->leftchild()))
        return true;
      return false;
    }

6.3
    template <class Elem>            // Print, postorder traversal
    void postprint(GTNode<Elem>* subroot) {
      for (GTNode<Elem>* temp = subroot->leftmost_child();
           temp != NULL; temp = temp->right_sibling())
        postprint(temp);
      if (subroot->isLeaf()) cout << "Leaf: ";
      else cout << "Internal: ";
      cout << subroot->value() << "\n";
    }

6.4
    template <class Elem>            // Count the number of nodes
    int gencount(GTNode<Elem>* subroot) {
      if (subroot == NULL) return 0;
      int count = 1;
      GTNode<Elem>* temp = subroot->leftmost_child();
      while (temp != NULL) {
        count += gencount(temp);
        temp = temp->right_sibling();
      }
      return count;
    }

6.5 The Weighted Union Rule requires that when two parent-pointer trees are merged, the smaller one's root becomes a child of the larger one's root. Thus, we need to keep track of the number of nodes in a tree. To do so, modify the node array to store an integer value with each node. Initially, each node is in its own tree, so the weights for each node begin as 1. Whenever we wish to merge two trees, check the weights of the roots to determine which has more nodes.
Then, add to the weight of the final root the weight of the new subtree.

6.6
Node:   0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Parent: -1 0  0  0  0  0  0  6  0  0  0  9  0  0  12 0

6.7 The resulting tree should have the following structure:
Node:   0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Parent: 4  4  4  4  -1 4  4  0  0  4  9  9  9  12 9  -1

6.8 For eight nodes labeled 0 through 7, use the following series of equivalences:
(0, 1) (2, 3) (4, 5) (6, 7) (4, 6) (0, 2) (4, 0)
This requires checking fourteen parent pointers (two for each equivalence), but none are actually followed since these are all roots. It is possible to double the number of parent pointers checked by choosing direct children of roots in each case.

6.9 For the "lists of children" representation, every node stores a data value and a pointer to its list of children. Further, every child (every node except the root) has a record associated with it containing an index and a pointer. Denoting the size of the data value by D, the size of a pointer by P, and the size of an index by I, the overhead fraction is
(3P + I) / (D + 3P + I).
For the "left-child/right-sibling" representation, every node stores three pointers and a data value, for an overhead fraction of
3P / (D + 3P).
The first linked representation of Section 6.3.3 stores with each node a data value and a size field (denoted by S). Each child (every node except the root) also has a pointer pointing to it. The overhead fraction is thus
(S + P) / (D + S + P),
making it quite efficient.
The second linked representation of Section 6.3.3 stores with each node a data value and a pointer to the list of children. Each child (every node except the root) has two additional pointers associated with it to indicate its place on the parent's linked list. Thus, the overhead fraction is
3P / (D + 3P).

6.10
    template <class Elem>
    BinNode<Elem>* convert(GTNode<Elem>* genroot) {
      if (genroot == NULL) return NULL;
      GTNode<Elem>* gtemp = genroot->leftmost_child();
      return new BinNode<Elem>(genroot->val(), convert(gtemp),
                               convert(genroot->right_sibling()));
    }

6.11
• Parent(r) = (r − 1)/k if 0 < r < n.
• Ith child(r) = kr + I if kr + I < n.
• Left sibling(r) = r − 1 if r mod k ≠ 1 and 0 < r < n.
• Right sibling(r) = r + 1 if r mod k ≠ 0 and r + 1 < n.

6.12 (a) The overhead fraction is 4(k + 1) / (4 + 4(k + 1)).
(b) The overhead fraction is 4k / (16 + 4k).
(c) The overhead fraction is 4(k + 2) / (16 + 4(k + 2)).
(d) The overhead fraction is 2k / (2k + 4).

6.13 Base Case: The number of leaves in a non-empty full K-ary tree with 0 internal nodes is (K − 1)·0 + 1 = 1. Thus, the theorem is correct in the base case.
Induction Hypothesis: Assume that the theorem is correct for any full K-ary tree containing n internal nodes.
Induction Step: Add K children to an arbitrary leaf node of the tree with n internal nodes. This new tree now has 1 more internal node and K − 1 more leaf nodes, so the theorem still holds. Thus, the theorem is correct, by the principle of mathematical induction.

6.14 (a) CA/BG///FEDD///H/I//
(b) CA/BG/FED/H/I

6.15
X
|
P
-----
|  |  |
C  Q  R
---
|  |
V  M
6.16 (a)
    // Use a helper function with a pass-by-reference
    // variable to indicate current position in the node list.
    template <class Elem>
    BinNode<Elem>* convert(char* inlist) {
      int curr = 0;
      return converthelp(inlist, curr);
    }

    // As converthelp processes the node list, curr is
    // incremented appropriately.
    template <class Elem>
    BinNode<Elem>* converthelp(char* inlist, int& curr) {
      if (inlist[curr] == '/') {
        curr++;
        return NULL;
      }
      BinNode<Elem>* temp = new BinNode<Elem>(inlist[curr++], NULL, NULL);
      temp->left = converthelp(inlist, curr);
      temp->right = converthelp(inlist, curr);
      return temp;
    }

(b)
    // Use a helper function with a pass-by-reference
    // variable to indicate current position in the node list.
    template <class Elem>
    BinNode<Elem>* convert(char* inlist) {
      int curr = 0;
      return converthelp(inlist, curr);
    }

    // As converthelp processes the node list, curr is
    // incremented appropriately.
    template <class Elem>
    BinNode<Elem>* converthelp(char* inlist, int& curr) {
      if (inlist[curr] == '/') {
        curr++;
        return NULL;
      }
      BinNode<Elem>* temp = new BinNode<Elem>(inlist[curr++], NULL, NULL);
      if (inlist[curr] == '\'') return temp;
      curr++;                          // Eat the internal node mark.
      temp->left = converthelp(inlist, curr);
      temp->right = converthelp(inlist, curr);
      return temp;
    }

(c)
    // Use a helper function with a pass-by-reference
    // variable to indicate current position in the node list.
    template <class Elem>
    GTNode<Elem>* convert(char* inlist) {
      int curr = 0;
      return converthelp(inlist, curr);
    }

    // As converthelp processes the node list, curr is
    // incremented appropriately.
    template <class Elem>
    GTNode<Elem>* converthelp(char* inlist, int& curr) {
      if (inlist[curr] == ')') {
        curr++;
        return NULL;
      }
      GTNode<Elem>* temp = new GTNode<Elem>(inlist[curr++]);
      if (inlist[curr] == ')') {
        temp->insert_first(NULL);
        return temp;
      }
      temp->insert_first(converthelp(inlist, curr));
      while (inlist[curr] != ')')
        temp->insert_next(converthelp(inlist, curr));
      curr++;
      return temp;
    }

6.17 The Huffman tree is a full binary tree. To decode, we do not need to know the weights of nodes, only the letter values stored in the leaf nodes. Thus, we can use a coding much like that of Equation 6.2, storing only a bit mark for internal nodes, and a bit mark and letter value for leaf nodes.

7 Internal Sorting

7.1 Base Case: For the list of one element, the double loop is not executed and the list is not processed. Thus, the list of one element remains unaltered and is sorted.
Induction Hypothesis: Assume that the list of n elements is sorted correctly by Insertion Sort.
Induction Step: The list of n + 1 elements is processed by first sorting the top n elements. By the induction hypothesis, this is done correctly. The final pass of the outer for loop will process the last element (call it X). This is done by the inner for loop, which moves X up the list until a value smaller than that of X is encountered. At this point, X has been properly inserted into the sorted list, leaving the entire collection of n + 1 elements correctly sorted. Thus, by the principle of mathematical induction, the theorem is correct.

7.2
    void StackSort(AStack<int>& IN) {
      AStack<int> Temp1, Temp2;
      while (!IN.isEmpty())              // Transfer to another stack
        Temp1.push(IN.pop());
      IN.push(Temp1.pop());              // Put back one element
      while (!Temp1.isEmpty()) {         // Process rest of elems
        while (IN.top() > Temp1.top())   // Find elem's place
          Temp2.push(IN.pop());
        IN.push(Temp1.pop());            // Put the element in
        while (!Temp2.isEmpty())         // Put the rest back
          IN.push(Temp2.pop());
      }
    }

7.3 The revised algorithm will work correctly, and its asymptotic complexity will remain Θ(n²).
However, it will do about twice as many comparisons, since it will compare adjacent elements within the portion of the list already known to be sorted. These additional comparisons are unproductive.

7.4 While binary search will find the proper place to locate the next element, it will still be necessary to move the intervening elements down one position in the array. This requires the same number of operations as a sequential search. However, it does reduce the number of element/element comparisons, and may be somewhat faster by a constant factor since shifting several elements may be more efficient than an equal number of swap operations.

7.5 (a)
    template <class Elem, class Comp>
    void selsort(Elem A[], int n) {        // Selection Sort
      for (int i=0; i<n-1; i++) {          // Select i'th record
        int lowindex = i;                  // Remember its index
        for (int j=n-1; j>i; j--)          // Find least value
          if (Comp::lt(A[j], A[lowindex]))
            lowindex = j;
        if (i != lowindex)                 // Add check for exercise
          swap(A, i, lowindex);            // Put it in place
      }
    }
(b) There is unlikely to be much improvement; more likely the algorithm will slow down. This is because the time spent checking (n times) is unlikely to save enough swaps to make up.
(c) Try it and see!

7.6
• Insertion Sort is stable. A swap is done only if the lower element's value is LESS.
• Bubble Sort is stable. A swap is done only if the lower element's value is LESS.
• Selection Sort is NOT stable. The new low value is set only if it is actually less than the previous one, but the direction of the search is from the bottom of the array. The algorithm will be stable if "less than" in the check becomes "less than or equal to" for selecting the low key position.
• Shellsort is NOT stable. The sublist sorts are done independently, and it is quite possible to swap an element in one sublist ahead of its equal value in another sublist. Once they are in the same sublist, they will retain this (incorrect) relationship.
• Quicksort is NOT stable. After selecting the pivot, it is swapped with the last element. This action can easily put equal records out of place.
• Conceptually (in particular, the linked list version) Mergesort is stable. The array implementations are NOT stable since, given that the sublists are stable, the merge operation will pick the element from the lower list before the upper list if they are equal. This is easily modified to replace "less than" with "less than or equal to."
• Heapsort is NOT stable. Elements in separate sides of the heap are processed independently, and could easily become out of relative order.
• Binsort is stable. Equal values that come later are appended to the list.
• Radix Sort is stable. While the processing is from bottom to top, the bins are also filled from bottom to top, preserving relative order.

7.7 In the worst case, the stack can store n records. This can be cut to log n in the worst case by putting the larger partition on the stack FIRST, followed by the smaller.
Thus, the smaller will be processed first, cutting the size of the next stacked partition by at least half.

7.8 Here is how I derived a permutation that will give the desired (worst-case) behavior:
a b c 0 d e f g   First, put 0 in pivot index (0+7)/2, assign labels to the other positions
a b c g d e f 0   First swap
0 b c g d e f a   End of first partition pass
0 b c g 1 e f a   Set d = 1; it is in pivot index (1+7)/2
0 b c g a e f 1   First swap
0 1 c g a e f b   End of partition pass
0 1 c g 2 e f b   Set a = 2; it is in pivot index (2+7)/2
0 1 c g b e f 2   First swap
0 1 2 g b e f c   End of partition pass
0 1 2 g b 3 f c   Set e = 3; it is in pivot index (3+7)/2
0 1 2 g b c f 3   First swap
0 1 2 3 b c f g   End of partition pass
0 1 2 3 b 4 f g   Set c = 4; it is in pivot index (4+7)/2
0 1 2 3 b g f 4   First swap
0 1 2 3 4 g f b   End of partition pass
0 1 2 3 4 g 5 b   Set f = 5; it is in pivot index (5+7)/2
0 1 2 3 4 g b 5   First swap
0 1 2 3 4 5 b g   End of partition pass
0 1 2 3 4 5 6 g   Set b = 6; it is in pivot index (6+7)/2
0 1 2 3 4 5 g 6   First swap
0 1 2 3 4 5 6 g   End of partition pass
0 1 2 3 4 5 6 7   Set g = 7.
Plugging the variable assignments into the original permutation yields:
2 6 4 0 1 3 5 7

7.9 (a) Each call to qsort costs Θ(i log i). Thus, the total cost is
sum_{i=1}^{n} i log i = Θ(n² log n).
(b) Each call to qsort costs Θ(n log n) for length(L) = n, so the total cost is Θ(n² log n).

7.10 All that we need to do is redefine the comparison test to use strcmp. The quicksort algorithm itself need not change. This is the advantage of parameterizing the comparator.

7.11 For n = 1000, n² = 1,000,000, n^1.5 = 1000 × √1000 ≈ 32,000, and n log n ≈ 10,000. So, the constant factor for Shellsort can be anything less than about 32 times that of Insertion Sort for Shellsort to be faster. The constant factor for Quicksort can be anything less than about 100 times that of Insertion Sort for Quicksort to be faster.

7.12 (a) The worst case occurs when all of the sublists are of size 1, except for one list of size i − k + 1. If this happens on each call to SPLITk, then the total cost of the algorithm will be Θ(n²).
(b) In the average case, the lists are split into k sublists of roughly equal length. Thus, the total cost is Θ(n log_k n).

7.13 (This question comes from Rawlins.) Assume that all nuts and all bolts have a partner. We use two arrays N[1..n] and B[1..n] to represent nuts and bolts.
Algorithm 1: use merge-sort to solve this problem.
First, split the input into n/2 sub-lists such that each sub-list contains two nuts and two bolts. Then sort each sub-list. We could well come up with a pair of nuts that are both smaller than either of a pair of bolts. In that case, all you can know is something like:
N1, N2

Algorithm Design and Analysis (2nd Edition), Zheng Zonghan: Chapter 1, Part 1


Learning requirements
Deeply understand the ideas behind each class of algorithms and how they are implemented
Be able to apply what has been learned to solve practical problems
Cultivate and improve computational thinking skills
Assessment
Homework and Reading: 20%
Final Exam (Written Test): 80%
Chapter 1: Basic Concepts of Algorithms
1.1 Introduction
1.1.1 Definition and Properties of an Algorithm
c mod 3 = 0    (1.1.3)
1.1.2 Algorithm Design and Complexity Analysis
Brute-force (exhaustive) method for the Hundred Chickens problem
Input: the total number n of chickens to buy (of the 3 kinds)
Output: the number k of solutions, and the numbers of roosters, hens, and chicks g[], m[], s[]

    void chicken_question(int n, int &k, int g[], int m[], int s[])
    {
      int a, b, c;
      k = 0;
      for (a = 0; a <= n; a++) {
        for (b = 0; b <= n; b++) {
          for (c = 0; c <= n; c++) {
            if ((a + b + c == n) && (5 * a + 3 * b + c / 3 == n) && (c % 3 == 0)) {
              g[k] = a;
              m[k] = b;
              s[k] = c;
              k++;
            }
          }
        }
      }
    }

Analysis shows that at most n/5 roosters and n/3 hens can be bought, so the algorithm can be improved accordingly.
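A minimal sketch of the suggested improvement (the function name chicken_question_improved is mine; the loop bounds follow the analysis above, and c is computed from a and b instead of being enumerated, reducing the triple loop to a double loop):

    void chicken_question_improved(int n, int &k, int g[], int m[], int s[])
    {
      k = 0;
      for (int a = 0; a <= n / 5; a++) {      // at most n/5 roosters
        for (int b = 0; b <= n / 3; b++) {    // at most n/3 hens
          int c = n - a - b;                  // chicks are forced by the head count
          if (c >= 0 && c % 3 == 0 && 5 * a + 3 * b + c / 3 == n) {
            g[k] = a;
            m[k] = b;
            s[k] = c;
            k++;
          }
        }
      }
    }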

Undergraduate Program Accreditation: Syllabus for "Programming, Algorithms and Data Structures (I)"


Syllabus for Programming, Algorithms and Data Structures (I)
Course code: 0812000217
Course name: Programming, Algorithms and Data Structures (I); English name: Programming, Algorithm and Data Structure I
Credits: 3; Course type: required
Total hours: 48, of which 48 are lecture hours, 0 lab hours, 0 computer-practice hours, 0 practical-training hours
Applicable major: Network Engineering; Suggested semester: 1; Prerequisites: none
Offering unit: School of Computer and Communication Engineering
I. Course overview
Programming, Algorithms and Data Structures (I) is a foundational course for the Computer Science and Technology, Software Engineering, Network Engineering, and Communication Engineering majors. It is the opening course of its course cluster and the first programming course students take after entering university. Built on C programming, it aims to familiarize students with the basic syntax of C, lead them into programming through extensive coding exercises, develop their basic abilities in data structures and algorithm analysis, and lay the foundation for subsequent courses.

II. Course objectives and graduation requirements
The learning objectives of this course are formulated on the basis of the graduation requirements in the 2017 training program, taking into account how the course supports the graduation requirements of the major.

Course objective 1: through studying the three basic control structures, functions, and related topics, students should master the basic ideas of structured programming, thoroughly grasp the top-down, stepwise-refinement design method, and be able to recognize the problem of dividing functionality into modules in the design and development of network engineering projects. (Supports graduation requirement 2.1: able to apply the basic principles of mathematics, natural science, and network engineering to identify and judge the key links of network engineering problems.)

Course objective 2: in the later stage of the C programming part, students are organized into groups around the requirements of the grade management information system project, study the system's functions and application background, divide up the concrete development tasks, and jointly debug and implement them in code; implementing the system strengthens each student's sense of role and of teamwork. (Supports graduation requirement 9.1: able to understand the position and responsibility of each role in a team with a multidisciplinary background, have a sense of teamwork, and be competent in the role tasks of an individual and of a team member.)

Course objective 3: by learning standard C syntax, applying basic knowledge of functions, linear lists, strings, and linked lists, and learning methods for describing algorithms, students become able to turn practical problems into algorithmic problems a computer can describe, and develop the ability to communicate using algorithm description methods.

2024 Edition: Complete Courseware for "Data Structures"


Circuit analysis: abstract the components and wires in a circuit as vertices and edges of a graph, and use graph algorithms for circuit analysis and optimization.
Routing algorithms: use a graph data structure to represent the topology of a computer network and apply shortest-path algorithms for routing.
Bioinformatics: represent complex biological systems such as gene-regulation networks as graphs for bioinformatics analysis and mining.
05 Searching and Sorting
Basic concepts and classification of searching
Selection sort algorithms
Simple selection sort: on each pass, select the smallest (or largest) of the elements not yet sorted and place it at the start of the unsorted range, until all elements have been placed.
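A minimal C++ sketch of the simple selection sort just described (illustrative only, not taken from the courseware):

    #include <utility>

    void selection_sort(int a[], int n)
    {
        for (int i = 0; i < n - 1; ++i) {     // position i receives the i-th smallest value
            int low = i;
            for (int j = i + 1; j < n; ++j)   // find the smallest remaining element
                if (a[j] < a[low]) low = j;
            if (low != i) std::swap(a[i], a[low]);
        }
    }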
Heapsort: a sorting algorithm designed around the heap data structure; it is a kind of selection sort. An array can be used to model the heap, and sorting is achieved by building a max-heap or a min-heap.
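Heapsort can be sketched very compactly with the standard heap algorithms; a hand-written version would instead build the heap with repeated sift-down operations (this is only an illustration, not the courseware's implementation):

    #include <algorithm>
    #include <vector>

    void heap_sort(std::vector<int>& a)
    {
        std::make_heap(a.begin(), a.end());   // arrange the array as a max-heap
        std::sort_heap(a.begin(), a.end());   // repeatedly move the maximum to the end
    }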
Merge sort algorithms
The idea of merge sort: merge two (or more) sorted lists into a new sorted list; that is, divide the sequence to be sorted into several subsequences, each of which is sorted, and then merge the sorted subsequences into one fully sorted sequence.
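A minimal sketch of this divide-and-merge idea (top-down recursive merge sort; illustrative only):

    #include <vector>

    // Merge the two already-sorted halves a[lo..mid) and a[mid..hi).
    static void merge_halves(std::vector<int>& a, int lo, int mid, int hi)
    {
        std::vector<int> tmp;
        tmp.reserve(hi - lo);
        int i = lo, j = mid;
        while (i < mid && j < hi) tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
        while (i < mid) tmp.push_back(a[i++]);
        while (j < hi)  tmp.push_back(a[j++]);
        for (int k = lo; k < hi; ++k) a[k] = tmp[k - lo];
    }

    void merge_sort(std::vector<int>& a, int lo, int hi)
    {
        if (hi - lo <= 1) return;             // one element is already sorted
        int mid = lo + (hi - lo) / 2;
        merge_sort(a, lo, mid);
        merge_sort(a, mid, hi);
        merge_halves(a, lo, mid, hi);
    }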
Open addressing, separate chaining, and similar methods (ways of resolving hash collisions).
Basic concepts and classification of sorting
Definition of sorting: rearranging an unordered sequence of records into an ordered sequence.
Classification of sorting: internal sorting and external sorting; internal sorting includes insertion sort, exchange sort, selection sort, merge sort, and others.
Insertion sort algorithms
Point 1: straight insertion sort. Each step inserts one element to be sorted into the already sorted sequence in front of it, finding the appropriate position (a sketch follows this list).
Point 2: Shell sort.
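A minimal sketch of straight insertion sort as described in Point 1 (Shell sort applies the same insertion idea to interleaved subsequences with a shrinking gap):

    void insertion_sort(int a[], int n)
    {
        for (int i = 1; i < n; ++i) {
            int x = a[i];                    // element to insert
            int j = i - 1;
            while (j >= 0 && a[j] > x) {     // shift larger elements one slot right
                a[j + 1] = a[j];
                --j;
            }
            a[j + 1] = x;                    // drop x into its position
        }
    }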
Binary tree traversal algorithms
Preorder traversal: visit the root first, then traverse the left subtree, and finally traverse the right subtree.
Inorder traversal: traverse the left subtree first, then visit the root, and finally traverse the right subtree.
Postorder traversal: traverse the left subtree first, then the right subtree, and finally visit the root.
Level-order traversal: visit all nodes of the binary tree level by level, from top to bottom and from left to right.
Traversal algorithms for trees and forests

Data Structures and Algorithms


Advanced Data Structures and Algorithm Analysis
Instructor: CHEN, YUE (陈越). E-mail: chenyue@
Courseware and homework sets can be downloaded from /dsaa/
Textbook: Data Structures and Algorithm Analysis in C (2nd Edition), Mark Allen Weiss; Chinese adaptation by Chen Yue. Email: weiss@
References:
数据结构与算法分析(C语言版), Wei Baogang, Chen Yue, Wang Shenkang, Zhejiang University Press
Data Structures, Algorithms, and Applications in C++ (English edition), Sartaj Sahni, McGraw-Hill & China Machine Press
数据结构课程设计, He Qinming, Feng Yan, Chen Yue, Zhejiang University Press
Grading Policies: Research Project (23 or 25), Discussions (14), Homework (5), Q&A (0.5 each), Total 45, Final Exam (55)
Discussions (14): form groups of 3 or 4; 28 in-class discussion topics; each takes 3-5 minutes; 14 = 28 x 10 / 20
Research topics (23 or 25): done in groups; 16 topics to choose from; report (18 or 20 points); in-class presentation (5-10 minutes, 5 points); the speaker will be chosen randomly from all the contributors; if there are many volunteers, only one group will be chosen; if there is no volunteer, I will talk about it
Homework (5): done independently; 10 problems; collected before the end of the next class meeting; 5 = 10 x 10 / 20; late penalty: 2 points/week
Q&A: for volunteers only; 0.5 point for each question asked or answered; come and claim your credits after each class session

Syllabus for the "Data Structures and Algorithms Lab" Course

IV. Assessment
1. Lab work (90% of the final grade). Details: this course has 36 lab hours, 12 lab sessions in total; grades use a 100-point scale, based on how well each lab is completed; the labs assess the ability to apply data structures and algorithms: for 12 independent problems, students must set design goals from the functional and performance requirements of each problem and select technically sound solutions that produce valid results.
2. Attendance. Details: random roll call, card-swipe check-in, and similar methods.
Problem 11, Task Scheduling (Schedule): given the initial priority settings, predict the execution order of a batch of computing tasks according to the scheduling rule. (Lab, 3 hours; one possible reading is sketched below.)
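The assignment does not spell out the scheduling rule here; as one plausible reading (highest priority runs first, ties broken by arrival order), a sketch using std::priority_queue follows. The Task fields and the tie-breaking rule are assumptions made for illustration, not part of the assignment:

    #include <queue>
    #include <string>
    #include <vector>

    struct Task {
        std::string name;
        int priority;    // assumed rule: larger value is scheduled earlier
        int arrival;     // assumed rule: earlier arrival wins ties
    };

    struct ByPriority {
        bool operator()(const Task& a, const Task& b) const {
            if (a.priority != b.priority) return a.priority < b.priority;
            return a.arrival > b.arrival;
        }
    };

    std::vector<std::string> predict_order(const std::vector<Task>& tasks)
    {
        std::priority_queue<Task, std::vector<Task>, ByPriority> q(tasks.begin(), tasks.end());
        std::vector<std::string> order;
        while (!q.empty()) {
            order.push_back(q.top().name);   // next task to execute
            q.pop();
        }
        return order;
    }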
Problem 12, Cyclic Shift (Cycle): a cyclic shift moves the first character of a string to the end while keeping the order of the other characters unchanged; for example, one cyclic shift turns ABCD into BCDA. Given two strings, determine whether they can be obtained from each other by some number of cyclic shifts. (Lab, 3 hours; a sketch of a common approach follows.)
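A common way to test this is the doubling trick: two strings are cyclic shifts of each other exactly when they have the same length and one occurs as a substring of the other concatenated with itself. A sketch:

    #include <string>

    bool is_cyclic_shift(const std::string& s, const std::string& t)
    {
        if (s.size() != t.size()) return false;
        return (t + t).find(s) != std::string::npos;   // every rotation of t appears in t+t
    }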
Problem 10, Toy (Toy): ZC was a natural at this as a child; one hand had not yet finished wiping his nose while the other was already restoring a toy from an arbitrary state to the initial state shown in figure (a). In those materially scarce years ZC owned only one such toy; in today's material abundance you own several, each in a different state. Now restore them all. (Lab, 3 hours.)
Syllabus for the Data Structures and Algorithms Lab
I. Basic course information
Course name: Data Structures and Algorithms Lab (Data Structure and Algorithm Experiment)
Course code: CST310411015

Data Structures and Algorithms (2)


When we first started programming, there were no ADTs: the same code was written over and over.
1-3 Model for an Abstract Data Type
In this section we provide a conceptual model for an Abstract Data Type (ADT).

Courseware for "Data Structures and Algorithms"

Natural language processing: data structures are used to represent the relations between sentences and words, for example dependency parse trees.
Computer vision: image processing and recognition use data structures such as linked lists and binary trees to store and manipulate image information.
Applications of algorithms in computer science
Encryption algorithms: used to protect the confidentiality and integrity of data; for example, RSA is used for public-key encryption.
Sorting algorithms: used to put data in order; for example, quicksort and merge sort are widely used in databases and search engines.
Merge sort: combine two or more sorted lists into a new sorted list.
Search algorithms
Linear search: start from one end of the data structure and check each element in turn, until the target element is found or every element has been checked.
Binary search: look for a particular element in a sorted data structure by starting the comparison at the middle element; if the middle element is the target, the search ends; if the target is greater or smaller than the middle element, continue the search in the half that is greater or smaller, again starting from its middle element. If at some step the remaining range is empty, the element is not present. Each comparison halves the search range.
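A minimal sketch of the halving search just described, on a sorted array (returns the index of key, or -1 if it is absent):

    int binary_search(const int a[], int n, int key)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;    // middle of the current range
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1;  // key can only be in the upper half
            else              hi = mid - 1;  // key can only be in the lower half
        }
        return -1;                           // range became empty: not found
    }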
04 Common Algorithm Implementations
Sorting algorithms
Bubble sort: repeatedly pass over the sequence to be sorted, comparing elements two at a time and swapping them if they are in the wrong order; the passes are repeated until no more swaps are needed, which means the sequence is sorted.
Quicksort: one pass of partitioning splits the data into two independent parts such that every element of one part is smaller than every element of the other; each part is then sorted by the same method, and the whole process proceeds recursively until the entire sequence is ordered.
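A minimal sketch of the partition-then-recurse scheme just described (taking the last element as the pivot; illustrative only):

    #include <utility>

    static int partition(int a[], int lo, int hi)       // pivot = a[hi]
    {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; ++j)
            if (a[j] < pivot) std::swap(a[i++], a[j]);  // smaller values go to the left part
        std::swap(a[i], a[hi]);                         // place the pivot between the parts
        return i;
    }

    void quick_sort(int a[], int lo, int hi)
    {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        quick_sort(a, lo, p - 1);      // all elements here are smaller than the pivot
        quick_sort(a, p + 1, hi);      // all elements here are at least the pivot
    }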
Applications of data structures in computer science
Database systems: data structures are the foundation of database systems, used to store, retrieve, and manage large amounts of data; for example, B-trees and hash tables are widely used in database indexes.

Data Structures, Chapter 7: Answers


I. Multiple-choice questions
01. In a graph, the sum of the degrees of all vertices equals ___ times the number of edges. A) 1/2 B) 1 C) 2 D) 4. Answer: C
02. In a directed graph, the sum of the in-degrees of all vertices equals ___ times the sum of the out-degrees. A) 1/2 B) 1 C) 2 D) 4. Answer: B
03. An undirected graph with 8 vertices has at most ___ edges. A) 14 B) 28 C) 56 D) 112. Answer: B
04. A connected undirected graph with 8 vertices has at least ___ edges. A) 5 B) 6 C) 7 D) 8. Answer: C
05. A complete directed graph with 8 vertices has ___ edges. A) 14 B) 28 C) 56 D) 112. Answer: C
06. When breadth-first traversal is performed on a graph represented by adjacency lists, a ___ is usually used to implement the algorithm. A) stack B) queue C) tree D) graph. Answer: B
07. When depth-first traversal is performed on a graph represented by adjacency lists, a ___ is usually used to implement the algorithm. A) stack B) queue C) tree D) graph. Answer: A
08. For a directed graph with n vertices and e arcs stored as an adjacency matrix, the time complexity of computing the out-degree of one vertex is ___. A) O(n) B) O(e) C) O(n+e) D) O(n²). Answer: A
09. Given the graph's adjacency matrix and following the algorithm, the depth-first traversal sequence starting from vertex 0 is ___. A) 0 2 4 3 1 5 6 B) 0 1 3 6 5 4 2 C) 0 1 3 4 2 5 6 D) 0 3 6 1 5 4 2. Answer: C
10. For the same adjacency matrix as in the previous question, the breadth-first traversal sequence starting from vertex 0 is ___. A) 0 2 4 3 6 5 1 B) 0 1 2 3 4 6 5 C) 0 4 2 3 1 5 6 D) 0 1 3 4 2 5 6. Answer: B
11. Given the graph's adjacency list as shown and following the algorithm, the depth-first traversal sequence starting from vertex 0 is ___. A) 0 1 3 2 B) 0 2 3 1 C) 0 3 2 1 D) 0 1 2 3. Answer: D
12. Given the graph's adjacency list as shown and following the algorithm, the breadth-first traversal sequence starting from vertex 0 is ___. A) 0 3 2 1 B) 0 1 2 3 C) 0 1 3 2 D) 0 3 1 2. Answer: A
13. Depth-first traversal of a graph is analogous to ___ of a binary tree. A) preorder traversal B) inorder traversal C) postorder traversal D) level-order traversal. Answer: A
14. Breadth-first traversal of a graph is analogous to ___ of a binary tree. Answer: D (level-order traversal)


Last Section
• ftp://211.71.1.204 • download-ben • Password: student
Probabilistic Algorithm
• NP-Completeness • Exact algorithm vs. Heuristics • Deterministic vs. Probabilistic • Approximation algorithm
Approximation Algorithm
• Bisection method • Position method • Newton's method • Difference Bound • Performance ratio • TSP (Nearest-neighbor algorithm)
Randomized Algorithms
• Finding a Good Student • Randomized Quicksort • Testing String Equality • Verification of Matrix Products

Verification of Matrix Products
Input: Three n×n matrices, A, B and C.
Question: Does A×B = C?
Deterministic algorithms: trivial algorithm O(n^3); Strassen O(n^2.81); best deterministic algorithm O(n^2.376).

Freivalds' Algorithm
• Compute the sum of a random subset of the rows of both AB and C.
• If the sums are different, return false (AB ≠ C).
• Otherwise, either return true or try again to improve confidence.
(The slide illustrates this with a pair of example 3×3 matrices.)
• The sum of any set of rows of a matrix may be computed by multiplying on the left by a row vector of which each element is either 0 or 1; e.g. [1 0 1] × M = [1 0 2] + [2 0 2], which adds the first and third rows of the example matrix M.
• Note that S×(A×B) = (S×A)×B. To compute (S×A)×B, where S is a row vector (i.e. a 1×n matrix): Y = S×A requires O(n^2) time, and Y×B requires O(n^2) time (because Y is 1×n).
• We may therefore compute the sum of any subset of rows of A×B in O(n^2) time.
• Now consider the following O(n^2) time probabilistic algorithm (Monte Carlo):
  Repeat k times {
    Generate a random row vector S over {0, 1};
    If (SA)B ≠ SC, then return AB ≠ C;
  }
  Return AB = C.
Complexity analysis: O(n^2).
• Suppose there is an error on row i of C, i.e. row i of C does not equal row i of AB. We make no assumptions about the other rows of C. Also, assume that we select any subset of the n rows with equal probability.
• There are 2^n subsets of the n rows. For example, if n = 3, then we have 8 subsets, i.e. (1, 2, 3), (1, 2), (1, 3), (2, 3), (1), (2), (3), and the empty set.
• We form 2^(n-1) pairs of subsets so that in each pair the subsets are the same except that one subset contains row i and the other does not. For example, if n = 3 and i = 2, then we have 4 pairs, i.e. {(1, 2, 3), (1, 3)}, {(1, 2), (1)}, {(2, 3), (3)} and {(2), the empty set}.
• Consider any pair (S, S'), where S contains i but S' does not. Represent S and S' by row vectors. Then
  (SAB − SC) − (S'AB − S'C) = S(AB − C) − S'(AB − C) = (S − S')(AB − C) ≠ 0.
  Hence, at most one of SAB = SC and S'AB = S'C is true.
• For at most 2^(n-1) of the 2^n subsets, the sums of the selected rows of AB and C will be equal.
• If C ≠ AB, then a single trial will fail to detect an error with probability at most 1/2.
• If k trials are done, the algorithm gives a wrong result with probability at most 1/2^k. For example, if k = 10, then the error probability is at most 0.1%.
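A compact sketch of the Monte Carlo verification loop above, in plain C++ (the helper names are mine, not from the slides; entries are assumed to fit in long long):

    #include <random>
    #include <vector>

    using Matrix = std::vector<std::vector<long long>>;

    // One Freivalds trial: pick a random 0/1 row vector s and test s*(A*B) == s*C,
    // evaluated as ((s*A)*B) so that every step costs only O(n^2).
    static bool freivalds_trial(const Matrix& A, const Matrix& B, const Matrix& C,
                                std::mt19937& gen)
    {
        const std::size_t n = A.size();
        std::bernoulli_distribution coin(0.5);
        std::vector<long long> s(n), sA(n, 0), sAB(n, 0), sC(n, 0);
        for (std::size_t i = 0; i < n; ++i) s[i] = coin(gen) ? 1 : 0;
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t i = 0; i < n; ++i) sA[j] += s[i] * A[i][j];    // s*A
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t i = 0; i < n; ++i) sAB[j] += sA[i] * B[i][j];  // (s*A)*B
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t i = 0; i < n; ++i) sC[j] += s[i] * C[i][j];    // s*C
        return sAB == sC;
    }

    // A "false" answer is always correct; a "true" answer is wrong with
    // probability at most 1/2^k when in fact A*B != C.
    bool probably_equal(const Matrix& A, const Matrix& B, const Matrix& C, int k)
    {
        std::mt19937 gen(std::random_device{}());
        for (int t = 0; t < k; ++t)
            if (!freivalds_trial(A, B, C, gen)) return false;   // certainly A*B != C
        return true;
    }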

Pseudo Random Number Generator
• A computer is a precise machine that makes exact decisions, so asking it to produce a completely random sequence is self-contradictory.
• Special methods are therefore used to generate pseudo-random sequences that are independent, uniformly distributed, random-looking, have a long period, are fast to compute, and use little memory.

Linear Congruential Generator
• A linear congruential generator uses the remainder operation to obtain a pseudo-random sequence:
  x_0 = d (the seed),  x_i = (a·x_{i-1} + b) mod m,
  where a, b, d are positive integers and m should be sufficiently large.

Multiplicative Congruential Generator
• Most random number generators generate a sequence of integers by the following recurrence:
  X_0 = a given integer (seed),  X_{i+1} = a·X_i (mod M).
  For example, for X_0 = 1, a = 5, M = 13, we have X_1 = 5 mod 13 = 5, X_2 = 25 mod 13 = 12, X_3 = 60 mod 13 = 8.
  Each integer in the sequence lies in the range [0, M − 1].
• Random numbers r_i in the range [0, 1] are obtained by r_i = X_i / M.
• The most famous generator (Park & Miller) has M = 2^31 − 1 = 2147483647 and a = 16807.
• Schrage solved the difficulty of overflow by using a factorization of M: M = aq + r, i.e. q = ⌊M/a⌋ and r = M mod a = M − aq. Then
  a·X mod M = a·(X mod q) − r·⌊X/q⌋        if this is ≥ 0,
            = a·(X mod q) − r·⌊X/q⌋ + M    otherwise.
• For the most famous generator, we have q = ⌊2147483647/16807⌋ = 127773 and r = 2836.
Note: a random number between m and n (m ≤ n) can be obtained by X = m + (n − m)·r, where r is a random number between 0 and 1.

Lower Bound Arguments

Lower Bounds
• Definition: a lower bound of a problem is the least amount of computation that any algorithm must use to solve this problem.
• Lower bounds are generally theoretically derived.
• The lower bound for a problem is not unique, but the higher the better. For example, Ω(1), Ω(n) and Ω(n log n) are all lower bounds for sorting, but among them Ω(n log n) is the best.
• Suppose that, at present, the best lower bound of a problem is Ω(n) and the time complexity of the best algorithm is O(n²). This means that we may try to find a better lower bound, e.g. Ω(n log n); we may try to find a better algorithm, e.g. O(n log n); or both the lower bound and the algorithm may be improved.
• If the present lower bound is Ω(n log n) and there is an algorithm with time complexity O(n log n), then the algorithm is optimal.

Lower Bound Arguments
• Trivial Lower Bounds • Information-Theoretic Arguments • Adversary Arguments • Problem Reduction

Lower Bound on Sorting
• Comparison sort: a family of algorithms that use comparisons to determine the sorted order of a list. Examples: Mergesort, Quicksort.
• Decision tree: a tree that describes the comparisons required to sort a list of data.

Decision Trees
• Given n items there are n! different ways to arrange them (permutations).
• Only one of the permutations is in sorted order.
• The aim of sorting is to discover the sorted permutation.
• The worst-case running time corresponds to the longest path from the root to a leaf.

The power of comparison
• Given n, we have M = n! permutations.
• Each permutation must appear as one of the leaves in the decision tree.
• Every time we perform a comparison, one of the two branches eliminates at most half of the permutations.
• Thus, if sorting is done along the longest path, we will use at least k = log2 M comparisons.
• Another proof is that since any binary tree of height k has at most 2^k leaves, we have 2^k ≥ n!.
• The best general sorting algorithm requires log2(n!) comparisons in the worst case.

How much is log2(n!)?
Method 1:
log2(n!) = log2(n(n−1)(n−2)···(2)(1))
         = log2 n + log2(n−1) + log2(n−2) + ··· + log2 2 + log2 1
         ≥ log2 n + log2(n−1) + log2(n−2) + ··· + log2(n/2)
         ≥ (n/2) log2(n/2) = (n/2)(log2 n − 1) = Ω(n log n)

Comparison-based Sorting
Theorem 1.
Any comparison-based sorting algorithm must use Ω(n log n) comparisons for sorting an array of n elements in the worst case.
• This means that it is impossible to perform sorting using O(n) or O(log n) or O(n log(log n)) comparisons.

Determining The Maximum
Theorem 2. Every comparison-based algorithm for determining the maximum of a set of n elements must use at least n/2 comparisons.
• Proof. Every element must participate in at least one comparison; if not, the uncompared element could be chosen to be the maximum. Each comparison compares 2 elements. Hence, at least n/2 comparisons must be made.
Theorem 3. Every comparison-based algorithm for determining the maximum of a set of n elements must use at least n − 1 comparisons.
• Proof. To say that a given element, x, is the maximum implies that every other element has lost at least one comparison with another element (not necessarily x). Each comparison produces at most one loser. Hence at least n − 1 comparisons must be used.

Lower Bound for Finding Max and Min
• Remember: we can compute both the maximum and minimum of a list of n numbers using (3/2)n − 2 comparisons. We will prove that this solution is optimal.
• Here is a simpler algorithm: compare x[1] with x[2], x[3] with x[4], etc. We get n/2 maxima and n/2 minima. Find the maximum of the n/2 maxima with n/2 − 1 comparisons, and do the same for the minima. The total cost: n/2 + n/2 − 1 + n/2 − 1 = 3n/2 − 2 (when n is even).
• Note that x is the minimum and y is the maximum only if every element other than x has won at least one comparison, and every element other than y has lost at least one comparison.
• Call a win "W" and a loss "L". For each element, we label it according to the following 4 cases:
  N: never compared with anybody;
  W: won all comparisons;
  L: lost all comparisons;
  WL: won at least one and lost at least one.
• The algorithm must assign n − 1 W's and n − 1 L's, i.e., it MUST need 2n − 2 "units of information". More precisely, n − 2 WL's, 1 W and 1 L.
• To prove that the worst case can happen, we will use what is called an adversary argument. An adversary is an opponent that the comparison algorithm plays against. Its ultimate goal is to maximize the number of comparisons that the algorithm makes (minimize the amount of information obtained) by constructing an input to the problem.
• The adversary tentatively assigns values to the elements, which may change over time. However, they can only change in a fashion CONSISTENT with previous answers. That is, an element labeled L may only DECREASE in value (since it has lost all previous comparisons, if it is decreased it will still lose all of them), and an element labeled W may only INCREASE in value. An element labeled WL cannot change.
Theorem 4. Every comparison-based method for determining both the maximum and minimum of a set of n numbers must use at least (3n/2) − 2 comparisons, in the worst case.
• Proof. Assume n is even. As we have seen above, the algorithm must learn 2n − 2 units of information. The most it can learn in one comparison is 2 units, but this can only happen in the case (N, N), in which both elements have never participated before in a comparison. This can only happen at most n/2 times. To learn the remaining n − 2 units of information, the algorithm must perform n − 2 comparisons in the worst case. The total number of comparisons is therefore at least n/2 + n − 2 = (3/2)n − 2.

End of Section.
