
lll.txt
Number-theoretic algorithms by the cryptographer Shoup (NTL)
/**************************************************************************\

MODULE: LLL

SUMMARY:

Routines are provided for lattice basis reduction, including both
exact-arithmetic variants (slow but sure) and floating-point variants
(fast but only approximate).

For an introduction to the basics of LLL reduction, see
[H. Cohen, A Course in Computational Algebraic Number Theory, Springer, 1993].

The LLL algorithm was introduced in [A. K. Lenstra, H. W. Lenstra, and
L. Lovasz, Math. Ann. 261 (1982), 515-534].

\**************************************************************************/

#include <NTL/mat_ZZ.h>

/**************************************************************************\

                         Exact Arithmetic Variants

\**************************************************************************/

long LLL(ZZ& det2, mat_ZZ& B, long verbose = 0);

long LLL(ZZ& det2, mat_ZZ& B, mat_ZZ& U, long verbose = 0);

long LLL(ZZ& det2, mat_ZZ& B, long a, long b, long verbose = 0);

long LLL(ZZ& det2, mat_ZZ& B, mat_ZZ& U, long a, long b,
         long verbose = 0);

// performs LLL reduction.
//
// B is an m x n matrix, viewed as m rows of n-vectors.  m may be less
// than, equal to, or greater than n, and the rows need not be
// linearly independent.  B is transformed into an LLL-reduced basis,
// and the return value is the rank r of B.  The first m-r rows of B
// are zero.
// More specifically, elementary row transformations are performed on
// B so that the non-zero rows of new-B form an LLL-reduced basis
// for the lattice spanned by the rows of old-B.
// The default reduction parameter is delta=3/4, which means
// that the squared length of the first non-zero basis vector
// is no more than 2^{r-1} times that of the shortest vector in
// the lattice.
//
// det2 is calculated as the *square* of the determinant
// of the lattice---note that sqrt(det2) is in general an integer
// only when r = n.
//
// In the second version, U is set to the transformation matrix, so
// that U is a unimodular m x m matrix with U * old-B = new-B.
// Note that the first m-r rows of U form a basis (as a lattice)
// for the kernel of old-B.
//
// The third and fourth versions allow an arbitrary reduction
// parameter delta=a/b, where 1/4 < a/b <= 1, and a and b are
// positive integers.
// For a basis reduced with parameter delta, the squared length
// of the first non-zero basis vector is no more than
// 1/(delta-1/4)^{r-1} times that of the shortest vector in the
// lattice (see, e.g., the article by Schnorr and Euchner mentioned below).
//
// The algorithm employed here is essentially the one in Cohen's book.


long image(ZZ& det2, mat_ZZ& B, long verbose = 0);
long image(ZZ& det2, mat_ZZ& B, mat_ZZ& U, long verbose = 0);

// This computes the image of B using a "cheap" version of the LLL:
// it performs the usual "size reduction", but it only swaps
// vectors when linear dependencies are found.
// I haven't seen this described in the literature, but it works
// fairly well in practice, and can also easily be shown
// to run in a reasonable amount of time with reasonably bounded
// numbers.
//
// As in the above LLL routines, the return value is the rank r of B,
// and the first m-r rows will be zero.  U is a unimodular m x m matrix
// with U * old-B = new-B.  det2 has the same meaning as above.
// Note that the first m-r rows of U form a basis (as a lattice)
// for the kernel of old-B.
// This is a reasonably practical algorithm for computing kernels.
// One can also apply image() to the kernel to get somewhat
// shorter basis vectors for the kernels (there are no linear
// dependencies, but the size reduction may help anyway).
// For even shorter kernel basis vectors, one can apply
// LLL().


/**************************************************************************\

                         Floating Point Variants

There are a number of floating point variants available:
you can choose the precision, the orthogonalization strategy,
and the reduction condition.

The wide variety of choices may seem a bit bewildering.
See below the discussion "How to choose?".

*** Precision:

  FP -- double
  QP -- quad_float (quasi quadruple precision)
        this is useful when roundoff errors can cause problems
  XD -- xdouble (extended exponent doubles)
        this is useful when numbers get too big
  RR -- RR (arbitrary precision floating point)
        this is useful for large precision and magnitudes

  Generally speaking, the choice FP will be the fastest,
  but may be prone to roundoff errors and/or overflow.

*** Orthogonalization Strategy:

  -- Classical Gram-Schmidt Orthogonalization.
     This choice uses classical methods for computing
     the Gram-Schmidt orthogonalization.
     It is fast but prone to stability problems.
     This strategy was first proposed by Schnorr and Euchner
     [C. P. Schnorr and M. Euchner, Proc. Fundamentals of Computation
     Theory, LNCS 529, pp. 68-85, 1991].
     The version implemented here is substantially different, improving
     both stability and performance.

  -- Givens Orthogonalization.
     This is a bit slower, but generally much more stable,
     and is really the preferred orthogonalization strategy.
     For a nice description of this, see Chapter 5 of
     [G. Golub and C. van Loan, Matrix Computations, 3rd edition,
     Johns Hopkins Univ. Press, 1996].

*** Reduction Condition:

  -- LLL: the classical LLL reduction condition.
  -- BKZ: Block Korkin-Zolotarev reduction.
     This is slower, but yields a higher-quality basis,
     i.e., one with shorter vectors.
     See the Schnorr-Euchner paper for a description of this.
     This basically generalizes the LLL reduction condition
     from blocks of size 2 to blocks of larger size.


************* Calling Syntax for LLL routines ***************

long [G_]LLL_{FP,QP,XD,RR} (mat_ZZ& B, [ mat_ZZ& U, ] double delta = 0.99,
                            long deep = 0, LLLCheckFct check = 0,
                            long verbose = 0);

* The [ ... ] notation indicates something optional,
  and the { ... } notation indicates something that is chosen from
  among several alternatives.

* The return value is the rank of B (but see below if check != 0).

* The optional prefix G_ indicates that Givens rotations are to be used;
  otherwise, classical Gram-Schmidt is used.

* The choice of FP, QP, XD, or RR determines the precision used.

* If the optional parameter U is given, then U is computed
  as the transition matrix:

     U * old_B = new_B

* The optional argument "delta" is the reduction parameter, and may
  be set so that 0.50 <= delta < 1.  Setting it close to 1 yields
  shorter vectors, and also improves the stability, but increases the
  running time.  Recommended value: delta = 0.99.

* The optional parameter "deep" can be set to any positive integer,
  which allows "deep insertions" of row k into row i, provided i <=
  deep or k-i <= deep.  Larger values of deep will usually yield
  shorter vectors, but the running time increases exponentially.

  NOTE: use of "deep" is obsolete, and has been "deprecated".
  It is recommended to use BKZ_FP to achieve higher-quality reductions.
  Moreover, the Givens versions do not support "deep", and setting
  deep != 0 will raise an error in this case.

* The optional parameter "check" is a function that is invoked after
  each size reduction with the current row as an argument.  If this
  function returns a non-zero value, the LLL procedure is immediately
  terminated.
  Note that it is possible that some linear dependencies
  remain undiscovered, so that the calculated rank value is in fact
  too large.  In any case, zero rows discovered by the algorithm
  will be placed at the beginning, as usual.

  The check argument (if not zero) should be a routine taking
  a const vec_ZZ& as an argument and returning a value of type long.
  LLLCheckFct is defined via a typedef as:

     typedef long (*LLLCheckFct)(const vec_ZZ&);

  See the file subset.c for an example of the use of this feature.

* The optional parameter "verbose" can be set to see all kinds of fun
  things printed while the routine is executing.  A status report is
  printed every once in a while, and the current basis is optionally
  dumped to a file.  The behavior can be controlled with these global
  variables:

     extern char *LLLDumpFile;  // file to dump basis, 0 => no dump;
                                // initially 0

     extern double LLLStatusInterval; // seconds between status reports
                                      // initially 900s = 15min


************* Calling Syntax for BKZ routines ***************

long [G_]BKZ_{FP,QP,QP1,XD,RR} (mat_ZZ& B, [ mat_ZZ& U, ] double delta=0.99,
                                long BlockSize=10, long prune=0,
                                LLLCheckFct check = 0, long verbose = 0);

These functions are equivalent to the LLL routines above,
except that Block Korkin-Zolotarev reduction is applied.
We describe here only the differences in the calling syntax.

* The optional parameter "BlockSize" specifies the size of the blocks
  used in the reduction.  High values yield shorter vectors, but the
  running time increases exponentially with BlockSize.
  BlockSize should be between 2 and the number of rows of B.

* The optional parameter "prune" can be set to any positive number to
  invoke the Volume Heuristic from [Schnorr and Horner, Eurocrypt
  '95].
  This can significantly reduce the running time, and hence
  allow a much bigger block size, but the quality of the reduction is
  of course not as good in general.  Higher values of prune mean
  better quality, and slower running time.
  When prune == 0, pruning is disabled.
  Recommended usage: for BlockSize >= 30, set 10 <= prune <= 15.

* The QP1 variant uses quad_float precision to compute the
  Gram-Schmidt data, but uses double precision in the search phase
  of the block reduction algorithm.  This seems adequate for most
  purposes, and is faster than QP, which uses quad_float precision
  uniformly throughout.


******************** How to choose? *********************

I think it is safe to say that nobody really understands
how the LLL algorithm works.  The theoretical analyses are a long way
from describing what "really" happens in practice.  Choosing the best
variant for a certain application ultimately is a matter of trial
and error.

The first thing to try is LLL_FP.
It is the fastest of the routines, and is adequate for many applications.

If there are precision problems, you will most likely get
a warning message, something like "warning--relaxing reduction".
If there are overflow problems, you should get an error message
saying that the numbers are too big.

If either of these happens, the next thing to try is G_LLL_FP,
which uses the somewhat slower, but more stable, Givens rotations.
This approach also has the nice property that the numbers remain
smaller, so there is less chance of an overflow.

If you are still having precision problems with G_LLL_FP,
try LLL_QP or G_LLL_QP, which use quad_float precision.

If you are still having overflow problems, try LLL_XD or G_LLL_XD.

I haven't yet come across a case where one *really* needs the
extra precision available in the RR variants.

All of the above discussion applies to the BKZ variants as well.
In addition, if you have a matrix with really big entries, you might
try using G_LLL_FP or LLL_XD first to reduce the sizes of the numbers,
before running
one of the BKZ variants.

Also, one shouldn't rule out using the "all integer" LLL routines.
For some highly structured matrices, this is not necessarily
much worse than some of the floating point versions, and can
under certain circumstances even be better.


******************** Implementation notes *********************

For all the floating point variants, I use a "relaxed" size reduction
condition.  Normally in LLL one makes all |\mu_{i,j}| <= 1/2.
However, this can easily lead to infinite loops in floating point
arithmetic.  So I use the condition |\mu_{i,j}| <= 1/2 + fudge, where
fudge is a very small number.  Even with this, one can fall into an
infinite loop.  To handle this situation, I added some logic that
detects, at quite low cost, when an infinite loop has been entered.
When that happens, fudge is replaced by fudge*2, and a warning message
"relaxing reduction condition" is printed.  We may do this relaxation
several times.

If fudge gets too big, we give up and abort, except that LLL_FP and
BKZ_FP make one last attempt to recover: they try to compute the
Gram-Schmidt coefficients using RR and continue.  As described above,
if you run into these problems, which you'll see in the error/warning
messages, it is more effective to use the QP and/or Givens variants.

For the Gram-Schmidt orthogonalization, lots of "bookkeeping" is done
to avoid computing the same things twice.

For the Givens orthogonalization, we cannot do so many bookkeeping
tricks.  Instead, we "cache" a certain amount of information, which
allows us to avoid computing certain things over and over again.

There are many other hacks and tricks to speed things up even further.
For example, if the matrix elements are small enough to fit in
double precision floating point, the algorithms avoid almost
all big integer arithmetic.
This is done in a dynamic, on-line
fashion, so even if the numbers start out big, whenever they
get small, we automatically switch to floating point arithmetic.

\**************************************************************************/


/**************************************************************************\

                         Other Stuff

\**************************************************************************/


void ComputeGS(const mat_ZZ& B, mat_RR& mu, vec_RR& c);

// Computes Gram-Schmidt data for B.  Assumes B is an m x n matrix of
// rank m.  If { B^*(i) } is the orthogonal basis, then c(i) =
// |B^*(i)|^2, and B^*(i) = B(i) - \sum_{j=1}^{i-1} mu(i,j) B^*(j).


void NearVector(vec_ZZ& w, const mat_ZZ& B, const vec_ZZ& a);

// Computes a vector w that is an approximation to the closest vector
// in the lattice spanned by B to a, using the "closest plane"
// algorithm from [Babai, Combinatorica 6:1-13, 1986].  B must be a
// square matrix, and it is assumed that B is already LLL or BKZ
// reduced (the better the reduction the better the approximation).
// Note that arithmetic in RR is used with the current value of
// RR::precision().

// NOTE: Both of these routines use classical Gram-Schmidt
// orthogonalization.
