lll.txt

*** Reduction Condition:

  -- LLL: the classical LLL reduction condition.

  -- BKZ: Block Korkin-Zolotarev reduction.
     This is slower, but yields a higher-quality basis,
     i.e., one with shorter vectors.
     See the Schnorr-Euchner paper for a description of this.
     This basically generalizes the LLL reduction condition
     from blocks of size 2 to blocks of larger size.
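
     For reference, the classical LLL condition with reduction parameter
     delta (the "delta" argument described below), written in the
     Gram-Schmidt notation used for ComputeGS near the end of this file,
     is the standard one:

        |mu(i,j)| <= 1/2                                  for all 1 <= j < i
                                                          (size reduction)

        delta * |B^*(i-1)|^2 <= |B^*(i) + mu(i,i-1)*B^*(i-1)|^2  for all i > 1
                                                          (Lovász condition)

     In the floating point variants the 1/2 bound is relaxed slightly;
     see the implementation notes near the end of this file.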


************* Calling Syntax for LLL routines ***************

long 
[G_]LLL_{FP,QP,XD,RR} (mat_ZZ& B, [ mat_ZZ& U, ] double delta = 0.99, 
                       long deep = 0, LLLCheckFct check = 0, long verbose = 0);

* The [ ... ] notation indicates something optional,
  and the { ... } indicates something that is chosen from
  among several alternatives.

* The return value is the rank of B (but see below if check != 0).

* The optional prefix G_ indicates that Givens rotations are to be used;
  otherwise, classical Gram-Schmidt is used.

* The choice FP, QP, XD, RR determines the precision used.

* If the optional parameter U is given, then U is computed
  as the transition matrix:

     U * old_B = new_B

* The optional argument "delta" is the reduction parameter, and may
  be set so that 0.50 <= delta < 1.  Setting it close to 1 yields
  shorter vectors, and also improves the stability, but increases the
  running time.  Recommended value: delta = 0.99.

* The optional parameter "deep" can be set to any positive integer,
  which allows "deep insertions" of row k into row i, provided i <=
  deep or k-i <= deep.  Larger values of deep will usually yield
  shorter vectors, but the running time increases exponentially.

  NOTE: use of "deep" is obsolete, and has been "deprecated".
  It is recommended to use BKZ_FP to achieve higher-quality reductions.
  Moreover, the Givens versions do not support "deep", and setting
  deep != 0 will raise an error in this case.

* The optional parameter "check" is a function that is invoked after
  each size reduction with the current row as an argument.  If this
  function returns a non-zero value, the LLL procedure is immediately
  terminated.  Note that it is possible that some linear dependencies
  remain undiscovered, so that the calculated rank value is in fact
  too large.  In any case, zero rows discovered by the algorithm
  will be placed at the beginning, as usual.

  The check argument (if not zero) should be a routine taking
  a const vec_ZZ& as an argument and returning a value of type long
  (see the sketch following this parameter list for an example).
  LLLCheckFct is defined via a typedef as:

     typedef long (*LLLCheckFct)(const vec_ZZ&);

  See the file subset.c for an example of the use of this feature.

* The optional parameter "verbose" can be set to see all kinds of fun
  things printed while the routine is executing.  A status report is
  printed every once in a while, and the current basis is optionally
  dumped to a file.  The behavior can be controlled with these global
  variables:

     extern char *LLLDumpFile;  // file to dump basis, 0 => no dump; 
                                // initially 0

     extern double LLLStatusInterval; // seconds between status reports 
                                      // initially 900s = 15min
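
For illustration, here is a minimal self-contained sketch (not part of the
NTL distribution) of a call to LLL_FP with a transition matrix and a check
function; the basis entries and the length bound in StopIfShort are
arbitrary example values.

// Illustrative sketch only -- not part of the NTL distribution.
// Reduces the rows of B in place, records the transition matrix U
// (so that U * old_B == new_B), and uses an LLLCheckFct to terminate
// early once a nonzero row of squared length <= 10000 appears.

#include <NTL/LLL.h>

NTL_CLIENT

static long StopIfShort(const vec_ZZ& v)
{
   ZZ len2;
   InnerProduct(len2, v, v);                // squared Euclidean length of v
   return (!IsZero(len2) && len2 <= 10000); // nonzero return value stops LLL
}

int main()
{
   mat_ZZ B;
   B.SetDims(3, 3);                         // small 3x3 example basis
   B[0][0] = 1; B[0][1] = 0; B[0][2] = 100000;
   B[1][0] = 0; B[1][1] = 1; B[1][2] = 123456;
   B[2][0] = 0; B[2][1] = 0; B[2][2] = 1000003;

   mat_ZZ U;                                // optional transition matrix
   long rank = LLL_FP(B, U, 0.99, 0, StopIfShort);  // delta = 0.99, deep = 0

   cout << "rank = " << rank << "\n";
   cout << "reduced basis:\n" << B << "\n";
   cout << "transition matrix:\n" << U << "\n";
   return 0;
}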



 
************* Calling Syntax for BKZ routines ***************

long 
[G_]BKZ_{FP,QP,QP1,XD,RR} (mat_ZZ& B, [ mat_ZZ& U, ] double delta=0.99,
                          long BlockSize=10, long prune=0, 
                          LLLCheckFct check = 0, long verbose = 0);

These functions are equivalent to the LLL routines above,
except that Block Korkin-Zolotarev reduction is applied.
We describe here only the differences in the calling syntax.

* The optional parameter "BlockSize" specifies the size of the blocks
  in the reduction.  High values yield shorter vectors, but the
  running time increases exponentially with BlockSize.
  BlockSize should be between 2 and the number of rows of B.

* The optional parameter "prune" can be set to any positive number to
  invoke the Volume Heuristic from [Schnorr and Horner, Eurocrypt
  '95].  This can significantly reduce the running time, and hence
  allow much bigger block size, but the quality of the reduction is
  of course not as good in general.  Higher values of prune mean
  better quality, and slower running time.  
  When prune == 0, pruning is disabled.
  Recommended usage: for BlockSize >= 30, set 10 <= prune <= 15.

* The QP1 variant uses quad_float precision to compute Gram-Schmidt,
  but uses double precision in the search phase of the block reduction
  algorithm.  This seems adequate for most purposes, and is faster
  than QP, which uses quad_float precision uniformly throughout.
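
As a quick illustration (an assumed usage sketch, not taken from the NTL
sources), a BKZ call differs from the corresponding LLL call only in the
extra BlockSize and prune arguments:

// Illustrative sketch only: BKZ reduction of an already-populated basis B.
// BlockSize = 20 and prune = 0 are arbitrary example choices; as noted
// above, pruning (e.g. 10 <= prune <= 15) is mainly worthwhile once
// BlockSize >= 30.

#include <NTL/LLL.h>

NTL_CLIENT

long bkz_reduce(mat_ZZ& B)
{
   mat_ZZ U;                            // optional transition matrix
   return BKZ_FP(B, U, 0.99, 20, 0);    // delta, BlockSize, prune
}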


******************** How to choose? *********************

I think it is safe to say that nobody really understands
how the LLL algorithm works.  The theoretical analyses are a long way
from describing what "really" happens in practice.  Choosing the best
variant for a certain application ultimately is a matter of trial
and error.

The first thing to try is LLL_FP.
It is the fastest of the routines, and is adequate for many applications.

If there are precision problems, you will most likely get
a warning message, something like "warning--relaxing reduction".
If there are overflow problems, you should get an error message
saying that the numbers are too big.

If either of these happens, the next thing to try is G_LLL_FP,
which uses the somewhat slower, but more stable, Givens rotations.
This approach also has the nice property that the numbers remain
smaller, so there is less chance of an overflow.

If you are still having precision problems with G_LLL_FP,
try LLL_QP or G_LLL_QP, which use quad_float (quasi-quadruple) precision.

If you are still having overflow problems, try LLL_XD or G_LLL_XD.

I haven't yet come across a case where one *really* needs the
extra precision available in the RR variants.

All of the above discussion applies to the BKZ variants as well.
In addition, if you have a matrix with really big entries, you might try 
using G_LLL_FP or LLL_XD first to reduce the sizes of the numbers,
before running one of the BKZ variants.
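
A sketch of that two-stage approach might look as follows (assuming B
already holds the basis with large entries; BlockSize = 20 is an arbitrary
example value):

// Illustrative two-stage sketch: first tame the entry sizes with LLL_XD
// (or G_LLL_FP), then apply BKZ for the higher-quality reduction.

#include <NTL/LLL.h>

NTL_CLIENT

void two_stage_reduce(mat_ZZ& B)
{
   LLL_XD(B);            // first pass: shrink the entries
   BKZ_FP(B, 0.99, 20);  // second pass: block reduction
}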

Also, one shouldn't rule out using the "all integer" LLL routines.
For some highly structured matrices, this is not necessarily
much worse than some of the floating point versions, and can
under certain circumstances even be better.


******************** Implementation notes *********************

For all the floating point variants, I use a "relaxed" size reduction
condition.  Normally in LLL one makes all |\mu_{i,j}| <= 1/2.
However, this can easily lead to infinite loops in floating point arithmetic.
So I use the condition |\mu_{i,j}| <= 1/2 + fudge, where fudge is 
a very small number.  Even with this, one can fall into an infinite loop.
To handle this situation, I added some logic that detects, at quite low cost,
when an infinite loop has been entered.  When that happens, fudge
is replaced by fudge*2, and a warning message "relaxing reduction condition"
is printed.   We may do this relaxation several times.
If fudge gets too big, we give up and abort, except that 
LLL_FP and BKZ_FP make one last attempt to recover:  they try to compute the
Gram-Schmidt coefficients using RR and continue.  As described above,
if you run into these problems, which you'll see in the error/warning
messages, it is more effective to use the QP and/or Givens variants.

For the Gram-Schmidt orthogonalization, lots of "bookkeeping" is done
to avoid computing the same thing twice.

For the Givens orthogonalization, we cannot do so many bookkeeping tricks.
Instead, we "cache" a certain amount of information, which
allows us to avoid computing certain things over and over again.

There are many other hacks and tricks to speed things up even further.
For example, if the matrix elements are small enough to fit in
double precision floating point, the algorithms avoid almost
all big integer arithmetic.  This is done in a dynamic, on-line
fashion, so even if the numbers start out big, whenever they
get small, we automatically switch to floating point arithmetic.

\**************************************************************************/




/**************************************************************************\

                         Other Stuff

\**************************************************************************/



void ComputeGS(const mat_ZZ& B, mat_RR& mu, vec_RR& c);

// Computes Gram-Schmidt data for B.  Assumes B is an m x n matrix of
// rank m.  If { B^*(i) } is the orthogonal basis, then c(i) =
// |B^*(i)|^2, and B^*(i) = B(i) - \sum_{j=1}^{i-1} mu(i,j) B^*(j).

void NearVector(vec_ZZ& w, const mat_ZZ& B, const vec_ZZ& a);

// Computes a vector w that is an approximation to the closest vector
// in the lattice spanned by B to a, using the "closest plane"
// algorithm from [Babai, Combinatorica 6:1-13, 1986].  B must be a
// square matrix, and it is assumed that B is already LLL or BKZ
// reduced (the better the reduction the better the approximation).
// Note that arithmetic in RR is used with the current value of
// RR::precision().

// NOTE: Both of these routines use classical Gram-Schmidt
// orthogonalization.
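
To tie these together, here is a small illustrative sketch (not part of the
NTL distribution) that LLL-reduces a basis, inspects its Gram-Schmidt data
with ComputeGS, and then approximates a closest vector with NearVector; all
numeric values are arbitrary.

// Illustrative sketch only -- not part of the NTL distribution.

#include <NTL/LLL.h>
#include <NTL/mat_RR.h>

NTL_CLIENT

int main()
{
   mat_ZZ B;
   B.SetDims(2, 2);                 // tiny 2x2 example basis
   B[0][0] = 17; B[0][1] = 3;
   B[1][0] = 5;  B[1][1] = 19;

   LLL_FP(B, 0.99);                 // NearVector assumes a reduced basis

   RR::SetPrecision(150);           // both routines work in RR at this precision

   mat_RR mu;
   vec_RR c;
   ComputeGS(B, mu, c);             // mu(i,j) coefficients, c(i) = |B^*(i)|^2

   vec_ZZ a, w;
   a.SetLength(2);
   a[0] = 100; a[1] = -40;          // arbitrary target vector

   NearVector(w, B, a);             // Babai's closest-plane approximation
   cout << "approximate closest lattice vector: " << w << "\n";
   return 0;
}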

