This is the README for lk-0.5.0.

See BUGS and ChangeLog for a summary of recent changes, including two
bug fixes since the last public release (0.4.17).

The home site of this package is
	http://www.cs.utoronto.ca/~neto/research/lk

Enjoy,
David Neto, netod@acm.org, 2000/9/17

Contents
========

 - Introduction
 - Making the programs
 - Why a new implementation?
 - Tools
 - Algorithms
 - Performance
 - Rough bits
 - Copyrights and Licenses
 - References

Introduction
============

Program LK implements the Lin-Kernighan heuristic for the symmetric
traveling salesman problem (TSP).

See the famous 1973 paper by Shen Lin and Brian Kernighan for the first
description and motivation of this algorithm. (References appear below.)
A more modern source is the chapter by Johnson and McGeoch; it describes
the Lin-Kernighan algorithm in the context of other local search
algorithms for the TSP.

This program was written by David Neto for his doctoral work in the
Department of Computer Science at the University of Toronto. It mostly
follows the design outline given by Johnson and McGeoch for the
implementation due to Johnson, Bentley, McGeoch and Rothberg (JBMR),
and is tuned for two-dimensional Euclidean instances.

I successfully defended my PhD on May 25, 2000. The thesis title is
``Efficient Cluster Compensation for Lin-Kernighan Heuristics''.
Find out more by visiting
	http://www.cs.utoronto.ca/~neto/research/thesis.html
The thesis is now available online in PostScript form.

This program introduces ``efficient cluster compensation'', an
algorithmic technique intended to make the Lin-Kernighan heuristic
more robust in the face of clustered data.

Mail suggestions and bug reports for LK to netod@acm.org. Please include
the version of `lk', which you can get by running `lk --version'.
If pertinent, include a pointer to sample inputs.

Programs jitter, shake, and tspgen are TSP instance generators.
These generators are the subject of active research. Program unifd
generates 2-d instances with uniform distribution over a square.

Making the programs
===================

See the file INSTALL for compilation instructions. In a nutshell:

	./configure
	make
	(cd script; make myall)
	(cd src; ../script/lkdoitcmp)

You'll probably want to execute the ../script/lkdoitcmp script while
inside the src directory. That creates two binaries:

	lk.no_d  (baseline Lin-Kernighan without cluster compensation), and
	lk.deg   (lk with cluster compensation)

You can tell which version is which by executing the binary with
--version. The cluster compensation version will have `deg' at the end
of its version number, e.g. LK 0.4.20deg

Scripts for munging data and running experiments are in the script
directory, the data directory, and this directory. Buyer beware. :)

The expt directory contains scripts for script/expt.pl that run a
battery of experiments. The expt directory also contains a horribly
convoluted makefile for processing the output logs of the experiments.

Once both binaries for lk are built, try this inside the expt directory:

	../script/expt.pl expttest
	make tsp.lin105..i1..cf.plt
	make tsp.lin105..i1..probe.plt
	make tsp.lin105..i1..move.plt

	# Shows run by run comparison with and without cluster compensation.
	gnuplot tsp.lin105..i1..cf.plt

	# Show probe depths of lambda-changes (how long does the t list grow,
	# and how often).
	gnuplot tsp.lin105..i1..probe.plt

	# Show move depths of improving lambda-changes (i.e. that we commit to).
	gnuplot tsp.lin105..i1..move.plt

If you want to try a more extensive set of experiments, try expttsp.

script/TSP.pm is a start at a Perl module that should mostly encapsulate
munging of TSPLIB files. For example, see script/tsplib2rothberg.pl.

Why a new implementation?
=========================

1. "You never really know something until you teach it to a computer."
   -- Donald Knuth (paraphrase)

2.
When I started writing this program, there were no freely available
implementations of the Lin-Kernighan heuristic.

3. I wanted to instrument the program so I could learn more about the
behaviour of the algorithm. Writing it myself seemed a natural way
to know where and what to instrument.

4. I originally wanted to run the code in parallel. Also, I chose the
performance-oriented implementation language C. Both factors meant I
needed to architect the code in a particular way. (Now I don't care
about parallelizing the code.)

5. I've written (most of) the code in a "literate style", using the
CWEB suite of literate programming tools. I'm convinced literate
programming is the way to go for experimental and expository
programming.

6. Free software is a good thing. By "free" I mean free to
redistribute and modify. (See http://www.debian.org/intro/free.)
I wanted to be able to distribute the code under the GNU GPL or the
GNU LGPL, so I needed to write a clean-room implementation.

I do not resent other people for not releasing their code. I have the
greatest respect for other researchers in this field (see the
references). They have already given a great deal in describing their
own work.

Let me clarify this stance with two quotes:

"My opinion on licenses is that `he who writes the code gets to choose
the license, and nobody else gets to complain.' Anybody complaining
about a copyright license is a whiner." -- Linus Torvalds

"Implementation is the sincerest form of flattery." -- L. Peter Deutsch

Note: I just found out about the Applegate, Bixby, Chvatal, Cook source
code release. I'm eager to try out their code.
(See http://www.caam.rice.edu/~keck/concorde.html)
But before I do, I'm releasing my own code, to establish the
"cleanliness" of my code.

Tools
=====

If you want to try the programs in this package, you will need a C
compiler and make.
The only non-standard C the program requires is the BSD resource-usage
functions (getrusage).

If you want to munge the output and run experiments automatically,
you will need Perl and GNU make (for expt/Makefile). If you want
to plot the munged output of experiments, you will need gnuplot.

If you want to read the code, you will need CWEB and TeX to process
the .w files into .dvi files.

If you want to develop the code, you will need CWEB (and make and a C
compiler). Edit the .w files, not the .c files! I also strongly
suggest using RCS. You'll probably also want GNU autoconf and GNU
automake. Remaking dependencies can be avoided by first using
automake --include-deps in the project root directory.

Algorithms
==========

For the Lin-Kernighan heuristic for the TSP, I worked from Lin and
Kernighan's original paper and from the Johnson and McGeoch chapter.
(See src/jbmr.w)

Jon Bentley and Doug McIlroy developed the QuickSort variant coded as
dsort() in file src/dsort.w. See their article "Engineering a Sort
Function". I used it here because it is deterministic: I need
repeatability for my experiments. Apparently, the Solaris qsort
function isn't always deterministic. (See src/dsort.w)

Kd-trees for semi-dynamic point sets are Jon Bentley's creation.
I've implemented 2-d trees only; 3-d and beyond shouldn't be hard.
Of the queries, I've implemented only the nearest-neighbour queries,
not the fixed-radius searches. (See src/kdtree.w)

The TSP code uses the oriented tour ADT as described by Fredman et al.
I've implemented the ADT in terms of arrays and two-level trees.
(See src/array.w, src/twolevel.w)

I've also implemented a crude version of Held-Karp lower bounds for
the TSP. (See src/ascend.w)

Program LK can also be used for approximately solving the minimum
weight perfect matching problem. The details are simpler than for
the TSP. I don't know of anyone else applying Lin-Kernighan to
weighted perfect matching, although Bentley used 2-opt for this
problem.
(See Bentley's paper on fast geometric algorithms for the TSP: he used
a 2-opt matching algorithm as a component in his approximate
Christofides algorithm.)
(See src/match.w: it plays the role for matching that jbmr.w does for
the TSP.)

I invented cluster compensation, and I think it's kind of neat.
(See src/decluster.w)

Fast least common ancestors in binary trees is implemented in
decluster.w. I use the algorithm described by Schieber.

The chaos game for iterated function systems played in ifs.w is
described in Michael Barnsley's _Fractals Everywhere_.

Performance
===========

Program lk is a high quality implementation of the Lin-Kernighan
heuristic. However, I wouldn't call it "state of the art". What do
I mean?

I consider the JBMR implementation to be the state of the art
implementation of the Lin-Kernighan heuristic for the TSP, and I use
it as the standard of comparison. In the latter stages of my doctoral
research another group, led by Bill Cook, publicized their own
extremely scalable implementation of the Lin-Kernighan heuristic.
(As I write, there is a DIMACS implementation challenge for the TSP;
see http://www.research.att.com/~dsj/chtsp/)

My implementation is "high quality" because it has similar algorithmic
behaviour to the JBMR implementation. For example, the average depth
of the searches is about 3 or 4 beyond the backtracking phase, i.e. to
about t_12 or t_14. (If you know Lin-Kernighan, you'll know what this
means.) It routinely gets tours that are within 2% of optimal (or
above the Held-Karp lower bound). It can be run on million-city
uniform geometric instances in reasonable time and space.

However, in tests I ran a while back, my implementation was about
twice as slow as the JBMR implementation. For example, in about 2 CPU
hours it can find near-optimal tours for million-city instances.
(One SGI Challenge 150MHz MIPS R4400 processor; using about 250MB of
RAM; a million cities drawn randomly from a uniform distribution over
the unit square; distance between cities is the Euclidean distance;
within 2% of optimal, or 2.5% above the expected Held-Karp bound.)

Rough bits
==========

LK is not finished. There are a number of things that can be improved.