
cvs2cl.pl  (Perl; page 1 of 4)
    # want to show that fact as well, so we collect all the branches
    # that this is the latest ancestor of and store them in
    # @branch_roots.  Just for reference, the format of the line we're
    # seeing at this point is:
    #
    #    branches:  1.5.2;  1.5.4;  ...;
    #
    # Okay, here goes:
    if (/^branches:\s+(.*);$/)
    {
      if ($Show_Branches)
      {
        my $lst = $1;
        $lst =~ s/(1\.)+1;|(1\.)+1$//;  # ignore the trivial branch 1.1.1
        if ($lst) {
          @branch_roots = split (/;\s+/, $lst);
        }
        else {
          undef @branch_roots;
        }
        next;
      }
      else
      {
        # Ugh.  This really bothers me.  Suppose we see a log entry
        # like this:
        #
        #    ----------------------------
        #    revision 1.1
        #    date: 1999/10/17 03:07:38;  author: jrandom;  state: Exp;
        #    branches:  1.1.2;
        #    Intended first line of log message begins here.
        #    ----------------------------
        #
        # The question is, how can we tell the difference between that
        # log message and a *two*-line log message whose first line is
        #
        #    "branches:  1.1.2;"
        #
        # See the problem?  The output of "cvs log" is inherently
        # ambiguous.
        #
        # For now, we punt: we liberally assume that people don't
        # write log messages like that, and just toss a "branches:"
        # line if we see it but are not showing branches.  I hope no
        # one ever loses real log data because of this.
        next;
      }
    }

    # If we have file name, time, and author, then we're just grabbing
    # log message texts:
    $detected_file_separator = /^$file_separator$/o;
    if ($detected_file_separator && ! (defined $revision)) {
      # No revisions for this file; can happen, e.g. "cvs log -d DATE"
      goto CLEAR;
    }
    unless ($detected_file_separator || /^$logmsg_separator$/o)
    {
      $msg_txt .= $_;   # Normally, just accumulate the message...
      next;
    }

    # ... until a msg separator is encountered:
    # Ensure the message contains something:
    if ((! $msg_txt)
        || ($msg_txt =~ /^\s*\.\s*$|^\s*$/)
        || ($msg_txt =~ /\*\*\* empty log message \*\*\*/))
    {
      if ($Prune_Empty_Msgs) {
        goto CLEAR;
      }
      # else
      $msg_txt = "[no log message]\n";
    }

    ### Store it all in the Grand Poobah:
    {
      my $dir_key;        # key into %grand_poobah
      my %qunk;           # complicated little jobbie, see below

      # Each revision of a file has a little data structure (a `qunk')
      # associated with it.  That data structure holds not only the
      # file's name, but any additional information about the file
      # that might be needed in the output, such as the revision
      # number, tags, branches, etc.  The reason to have these things
      # arranged in a data structure, instead of just appending them
      # textually to the file's name, is that we may want to do a
      # little rearranging later as we write the output.  For example,
      # all the files on a given tag/branch will go together, followed
      # by the tag in parentheses (so trunk or otherwise non-tagged
      # files would go at the end of the file list for a given log
      # message).  This rearrangement is a lot easier to do if we
      # don't have to reparse the text.
      #
      # A qunk looks like this:
      #
      #   {
      #     filename    =>    "hello.c",
      #     revision    =>    "1.4.3.2",
      #     time        =>    a timegm() return value (moment of commit)
      #     tags        =>    [ "tag1", "tag2", ... ],
      #     branch      =>    "branchname" # There should be only one, right?
      #     branchroots =>    [ "branchtag1", "branchtag2", ... ]
      #   }

      if ($Distributed) {
        # Just the basename, don't include the path.
        ($qunk{'filename'}, $dir_key, undef) = fileparse ($file_full_path);
      }
      else {
        $dir_key = "./";
        $qunk{'filename'} = $file_full_path;
      }

      # This may someday be used in a more sophisticated calculation
      # of what other files are involved in this commit.  For now, we
      # don't use it, because the common-commit-detection algorithm is
      # hypothesized to be "good enough" as it stands.
      $qunk{'time'} = $time;

      # We might be including revision numbers and/or tags and/or
      # branch names in the output.  Most of the code from here to
      # loop-end deals with organizing these in qunk.
      $qunk{'revision'} = $revision;

      # Grab the branch, even though we may or may not need it:
      $qunk{'revision'} =~ /((?:\d+\.)+)\d+/;
      my $branch_prefix = $1;
      $branch_prefix =~ s/\.$//;  # strip off final dot
      if ($branch_names{$branch_prefix}) {
        $qunk{'branch'} = $branch_names{$branch_prefix};
      }

      # If there's anything in the @branch_roots array, then this
      # revision is the root of at least one branch.  We'll display
      # them as branch names instead of revision numbers, the
      # substitution for which is done directly in the array:
      if (@branch_roots) {
        my @roots = map { $branch_names{$_} } @branch_roots;
        $qunk{'branchroots'} = \@roots;
      }

      # Save tags too.
      if (defined ($symbolic_names{$revision})) {
        $qunk{'tags'} = $symbolic_names{$revision};
        delete $symbolic_names{$revision};
      }

      # Add this file to the list
      # (We use many spoonfuls of autovivification magic. Hashes and arrays
      # will spring into existence if they aren't there already.)
      &debug ("(pushing log msg for ${dir_key}$qunk{'filename'})\n");

      # Store with the files in this commit.  Later we'll loop through
      # again, making sure that revisions with the same log message
      # and nearby commit times are grouped together as one commit.
      push (@{$grand_poobah{$dir_key}{$author}{$time}{$msg_txt}}, \%qunk);
    }

  CLEAR:
    # Make way for the next message
    undef $msg_txt;
    undef $time;
    undef $revision;
    undef $author;
    undef @branch_roots;

    # Maybe even make way for the next file:
    if ($detected_file_separator) {
      undef $file_full_path;
      undef %branch_names;
      undef %branch_numbers;
      undef %symbolic_names;
    }
  }

  close (LOG_SOURCE);

  ### Process each ChangeLog
  while (my ($dir,$authorhash) = each %grand_poobah)
  {
    &debug ("DOING DIR: $dir\n");

    # Here we twist our hash around, from being
    #   author => time => message => filelist
    # in %$authorhash to
    #   time => author => message => filelist
    # in %changelog.
    #
    # This is also where we merge entries.  The algorithm proceeds
    # through the timeline of the changelog with a sliding window of
    # $Max_Checkin_Duration seconds; within that window, entries that
    # have the same log message are merged.
    #
    # (To save space, we zap %$authorhash after we've copied
    # everything out of it.)

    my %changelog;
    while (my ($author,$timehash) = each %$authorhash)
    {
      my $lasttime;
      my %stamptime;
      foreach my $time (sort {$main::a <=> $main::b} (keys %$timehash))
      {
        my $msghash = $timehash->{$time};
        while (my ($msg,$qunklist) = each %$msghash)
        {
          my $stamptime = $stamptime{$msg};
          if ((defined $stamptime)
              and (($time - $stamptime) < $Max_Checkin_Duration)
              and (defined $changelog{$stamptime}{$author}{$msg}))
          {
            push(@{$changelog{$stamptime}{$author}{$msg}}, @$qunklist);
          }
          else {
            $changelog{$time}{$author}{$msg} = $qunklist;
            $stamptime{$msg} = $time;
          }
        }
      }
    }
    undef (%$authorhash);

    ### Now we can write out the ChangeLog!

    my ($logfile_here, $logfile_bak, $tmpfile);
    if (! $Output_To_Stdout) {
      $logfile_here =  $dir . $Log_File_Name;
      $logfile_here =~ s/^\.\/\//\//;   # fix any leading ".//" problem
      $tmpfile      = "${logfile_here}.cvs2cl$$.tmp";
      $logfile_bak  = "${logfile_here}.bak";
      open (LOG_OUT, ">$tmpfile") or die "Unable to open \"$tmpfile\"";
    }
    else {
      open (LOG_OUT, ">-") or die "Unable to open stdout for writing";
    }

    print LOG_OUT $ChangeLog_Header;

    if ($XML_Output) {
      print LOG_OUT "<?xml version=\"1.0\"?>\n\n"
          . "<changelog xmlns=\"http://www.red-bean.com/xmlns/cvs2cl/\">\n\n";
    }

    foreach my $time (sort {$main::b <=> $main::a} (keys %changelog))
    {
      my $authorhash = $changelog{$time};
      while (my ($author,$mesghash) = each %$authorhash)
      {
        # If XML, escape in outer loop to avoid compound quoting:
        if ($XML_Output) {
          $author = &xml_escape ($author);
        }
        while (my ($msg,$qunklist) = each %$mesghash)
        {
          my $files               = &pretty_file_list ($qunklist);
          my $header_line;          # date and author
          my $body;                 # see below
          my $wholething;           # $header_line + $body

          # Set up the date/author line.
          # kff todo: do some more XML munging here, on the header
          # part of the entry:
          my ($ignore,$min,$hour,$mday,$mon,$year,$wday)
              = $UTC_Times ? gmtime($time) : localtime($time);

          # XML output includes everything else, we might as well make
          # it always include Day Of Week too, for consistency.
          if ($Show_Day_Of_Week or $XML_Output) {
            $wday = ("Sunday", "Monday", "Tuesday", "Wednesday",
                     "Thursday", "Friday", "Saturday")[$wday];
            $wday = ($XML_Output) ? "<weekday>${wday}</weekday>\n" : " $wday";
          }
          else {
            $wday = "";
          }

          if ($XML_Output) {
            $header_line =
                sprintf ("<date>%4u-%02u-%02u</date>\n"
                        . "${wday}"
                        . "<time>%02u:%02u</time>\n"
                        . "<author>%s</author>\n",
                        $year+1900, $mon+1, $mday, $hour, $min, $author);
          }
          else {
            $header_line =
                sprintf ("%4u-%02u-%02u${wday} %02u:%02u  %s\n\n",
                        $year+1900, $mon+1, $mday, $hour, $min, $author);
          }

          # Reshape the body according to user preferences.
          if ($XML_Output)
          {
            $msg = &preprocess_msg_text ($msg);
            $body = $files . $msg;
          }
          elsif ($No_Wrap)
          {
            $msg = &preprocess_msg_text ($msg);
            $files = wrap ("\t", "\t", "$files");
            $msg =~ s/\n(.*)/\n\t$1/g;
            unless ($After_Header eq " ") {
              $msg =~ s/^(.*)/\t$1/g;
            }
            $body = $files . $After_Header . $msg;
          }
          else  # do wrapping, either FSF-style or regular
          {
            if ($FSF_Style)
            {
              $files = wrap ("\t", "        ", "$files");

              my $files_last_line_len = 0;
              if ($After_Header eq " ")
              {
                $files_last_line_len = &last_line_len ($files);
                $files_last_line_len += 1;  # for $After_Header
              }

              $msg = &wrap_log_entry
                  ($msg, "\t", 69 - $files_last_line_len, 69);
              $body = $files . $After_Header . $msg;
            }
            else  # not FSF-style
            {
              $msg = &preprocess_msg_text ($msg);
              $body = $files . $After_Header . $msg;
              $body = wrap ("\t", "        ", "$body");
            }
          }

          $wholething = $header_line . $body;

          if ($XML_Output) {
            $wholething = "<entry>\n${wholething}</entry>\n";
          }

          # One last check: make sure it passes the regexp test, if the
          # user asked for that.  We have to do it here, so that the
          # test can match against information in the header as well
          # as in the text of the log message.
          # How annoying to duplicate so much code just because I
          # can't figure out a way to evaluate scalars on the trailing
          # operator portion of a regular expression.  Grrr.
          if ($Case_Insensitive) {
            unless ($Regexp_Gate && ($wholething =~ /$Regexp_Gate/oi)) {
              print LOG_OUT "${wholething}\n";
            }
          }
          else {
            unless ($Regexp_Gate && ($wholething =~ /$Regexp_Gate/o)) {
              print LOG_OUT "${wholething}\n";
            }
          }
        }
      }
    }

    if ($XML_Output) {
      print LOG_OUT "</changelog>\n";
    }

    close (LOG_OUT);

    if (! $Output_To_Stdout)
    {
      # If accumulating, append old data to new before renaming.  But
      # don't append the most recent entry, since it's already in the
      # new log due to CVS's idiosyncratic interpretation of "log -d".
      if ($Cumulative && -f $logfile_here)
      {
        open (NEW_LOG, ">>$tmpfile")
            or die "trouble appending to $tmpfile ($!)";
        open (OLD_LOG, "<$logfile_here")
            or die "trouble reading from $logfile_here ($!)";

        my $started_first_entry = 0;
        my $passed_first_entry = 0;
        while (<OLD_LOG>)
        {
          if (! $passed_first_entry)
          {
            if ((! $started_first_entry)
                && /^(\d\d\d\d-\d\d-\d\d\s+\d\d:\d\d)/) {
              $started_first_entry = 1;
            }
            elsif (/^(\d\d\d\d-\d\d-\d\d\s+\d\d:\d\d)/) {
              $passed_first_entry = 1;
              print NEW_LOG $_;
            }
          }
          else {
            print NEW_LOG $_;
          }
        }

        close (NEW_LOG);
        close (OLD_LOG);
      }

      if (-f $logfile_here) {
        rename ($logfile_here, $logfile_bak);
      }
      rename ($tmpfile, $logfile_here);
    }
  }
}

sub parse_date_and_author ()
{
  # Parses the date/time and author out of a line like:
  #
  # date: 1999/02/19 23:29:05;  author: apharris;  state: Exp;

  my $line = shift;

  my ($year, $mon, $mday, $hours, $min, $secs, $author) = $line =~
      m#(\d+)/(\d+)/(\d+)\s+(\d+):(\d+):(\d+);\s+author:\s+([^;]+);#
          or  die "Couldn't parse date ``$line''";

  die "Bad date or Y2K issues" unless ($year > 1969 and $year < 2258);
  # Kinda arbitrary, but useful as a sanity check

  my $time = timegm($secs,$min,$hours,$mday,$mon-1,$year-1900);

  return ($time, $author);
}

# Here we take a bunch of qunks and convert them into a printed
# summary that will include all the information the user asked for.
sub pretty_file_list ()
{
  if ($Hide_Filenames and (! $XML_Output)) {
    return "";
  }

  my $qunksref = shift;
  my @qunkrefs = @$qunksref;
  my @filenames;
  my $beauty = "";          # The accumulating header string for this entry.
  my %non_unanimous_tags;   # Tags found in a proper subset of qunks
  my %unanimous_tags;       # Tags found in all qunks
  my %all_branches;         # Branches found in any qunk
  my $common_dir = undef;   # Dir prefix common to all files ("" if none)
  my $fbegun = 0;           # Did we begin printing filenames yet?

  # First, loop over the qunks gathering all the tag/branch names.
  # We'll put them all in non_unanimous_tags, and take out the
  # unanimous ones later.
  foreach my $qunkref (@qunkrefs)
  {
    # Keep track of whether all the files in this commit were in the
    # same directory, and memorize it if so.  We can make the output a
    # little more compact by mentioning the directory only once.
    if ((scalar (@qunkrefs)) > 1)
    {
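(End of page 1 of 4; the function continues on the next page.)

As a side note, the date-parsing idiom in `parse_date_and_author` above can be exercised on its own. The following is a standalone sketch, not part of cvs2cl.pl itself: it reuses the regex and the `timegm()` call from the source on the sample log line quoted in the function's own comment, and assumes the standard `Time::Local` module (which cvs2cl.pl relies on for `timegm`).

```perl
#!/usr/bin/perl
# Standalone sketch of the parse_date_and_author logic from cvs2cl.pl.
use strict;
use warnings;
use Time::Local;

# Sample "cvs log" date line, taken from the comment in the source:
my $line = "date: 1999/02/19 23:29:05;  author: apharris;  state: Exp;";

# Same regex as in parse_date_and_author:
my ($year, $mon, $mday, $hours, $min, $secs, $author) = $line =~
    m#(\d+)/(\d+)/(\d+)\s+(\d+):(\d+):(\d+);\s+author:\s+([^;]+);#
        or die "Couldn't parse date ``$line''";

# timegm() takes a 0-based month and a year offset from 1900,
# hence the $mon-1 and $year-1900 in the original code:
my $time = timegm($secs, $min, $hours, $mday, $mon - 1, $year - 1900);

print "$author committed at epoch $time\n";
```

The resulting epoch value is what gets stored in each qunk's `time` field and later fed back through `gmtime()`/`localtime()` when the ChangeLog entry headers are formatted.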
