

kernels.cpp

Source code for JPEG2000 image compression and decompression
CPP
/*****************************************************************************/
// File: kernels.cpp [scope = CORESYS/DWT-KERNELS]
// Version: Kakadu, V2.2
// Author: David Taubman
// Last Revised: 20 June, 2001
/*****************************************************************************/
// Copyright 2001, David Taubman, The University of New South Wales (UNSW)
// The copyright owner is Unisearch Ltd, Australia (commercial arm of UNSW)
// Neither this copyright statement, nor the licensing details below
// may be removed from this file or dissociated from its contents.
/*****************************************************************************/
// Licensee: Book Owner
// License number: 99999
// The Licensee has been granted a NON-COMMERCIAL license to the contents of
// this source file, said Licensee being the owner of a copy of the book,
// "JPEG2000: Image Compression Fundamentals, Standards and Practice," by
// Taubman and Marcellin (Kluwer Academic Publishers, 2001).  A brief summary
// of the license appears below.  This summary is not to be relied upon in
// preference to the full text of the license agreement, which was accepted
// upon breaking the seal of the compact disc accompanying the above-mentioned
// book.
// 1. The Licensee has the right to Non-Commercial Use of the Kakadu software,
//    Version 2.2, including distribution of one or more Applications built
//    using the software, provided such distribution is not for financial
//    return.
// 2. The Licensee has the right to personal use of the Kakadu software,
//    Version 2.2.
// 3. The Licensee has the right to distribute Reusable Code (including
//    source code and dynamically or statically linked libraries) to a Third
//    Party, provided the Third Party possesses a license to use the Kakadu
//    software, Version 2.2, and provided such distribution is not for
//    financial return.
/******************************************************************************
Description:
   Implements the services defined by "kdu_kernels.h"
******************************************************************************/

#include <assert.h>
#include <math.h>
#include "kdu_elementary.h"
#include "kdu_messaging.h"
#include "kdu_params.h"
#include "kdu_kernels.h"

/* ========================================================================= */
/*                                kdu_kernels                                */
/* ========================================================================= */

/*****************************************************************************/
/*                          kdu_kernels::kdu_kernels                         */
/*****************************************************************************/

kdu_kernels::kdu_kernels(int kernel_id, bool reversible)
{
  this->kernel_id = kernel_id;
  this->downshifts = NULL;
  if (kernel_id == Ckernels_W5X3)
    {
      num_steps = 2;
      lifting_factors = new float[num_steps];
      lifting_factors[0] = -0.5F;
      lifting_factors[1] = 0.25F;
      if (reversible)
        {
          downshifts = new int[num_steps];
          downshifts[0] = 1;
          downshifts[1] = 2;
        }
    }
  else if (kernel_id == Ckernels_W9X7)
    {
      num_steps = 4;
      lifting_factors = new float[num_steps];
      if (reversible)
        { kdu_error e; e << "The W9X7 kernel may not be used for reversible "
          "compression!"; }
      lifting_factors[0] = (float) -1.586134342;
      lifting_factors[1] = (float) -0.052980118;
      lifting_factors[2] = (float) 0.882911075;
      lifting_factors[3] = (float)  0.443506852;
    }
  else
    { kdu_error e; e << "Illegal DWT kernel ID used to construct a "
      "`kdu_kernels' object."; }

  // Now let's derive all the remaining quantities.
  low_analysis_L   = num_steps;   // These lengths may be pessimistic
  high_analysis_L  = num_steps-1; // if one or more of the lifting factors
  low_synthesis_L  = num_steps-1; // is equal to 0.
  high_synthesis_L = num_steps;
  low_analysis_taps = (new float[2*low_analysis_L+1])+low_analysis_L;
  high_analysis_taps = (new float[2*high_analysis_L+1])+high_analysis_L;
  low_synthesis_taps = (new float[2*low_synthesis_L+1])+low_synthesis_L;
  high_synthesis_taps = (new float[2*high_synthesis_L+1])+high_synthesis_L;

  // Initialize the vector expansion buffers.
  max_expansion_levels = 4;
  work_L = num_steps+1; // Allow for placing the input impulse at n=1 or n=0.
  for (int d=1; d < max_expansion_levels; d++)
    work_L = work_L*2 + num_steps;
  work1 = (new float[2*work_L+1]) + work_L;
  work2 = (new float[2*work_L+1]) + work_L;
  bibo_step_gains = new double[num_steps];

  // Deduce synthesis impulse responses, without scaling factors.
  int n, k;
  for (n=0; n <= work_L; n++)
    work1[n] = work1[-n] = 0.0F;
  work1[0]=1.0F; // Simulate an impulse in the low-pass subband.
  for (k=num_steps-1; k >= 0; k--)
    {
      if (k&1)
        n = -(num_steps & (~1)); // Smallest even integer >= -num_steps.
      else
        n = ((-num_steps) & (~1)) + 1; // Smallest odd integer >= -num_steps.
      for (; n <= num_steps; n+=2)
        work1[n] -= lifting_factors[k]*(work1[n-1]+work1[n+1]);
    }
  for (n=0; n <= low_synthesis_L; n++)
    low_synthesis_taps[n] = low_synthesis_taps[-n] = work1[n];
  for (n=0; n <= work_L; n++)
    work1[n] = work1[-n] = 0.0F;
  work1[1]=1.0F; // Simulate an impulse in the high-pass subband.
  for (k=num_steps-1; k >= 0; k--)
    {
      if (k&1)
        n = -(num_steps & (~1)); // Smallest even integer >= -num_steps.
      else
        n = ((-num_steps) & (~1)) + 1; // Smallest odd integer >= -num_steps.
      for (; n <= (num_steps+1); n+=2)
        work1[n] -= lifting_factors[k]*(work1[n-1]+work1[n+1]);
    }
  for (n=0; n <= high_synthesis_L; n++)
    high_synthesis_taps[n] = high_synthesis_taps[-n] = work1[n+1];

  // Deduce analysis kernels from synthesis kernels.
  float sign_flip;
  for (sign_flip=1.0F, n=0; n <= low_analysis_L; n++, sign_flip=-sign_flip)
    low_analysis_taps[n] = low_analysis_taps[-n] =
      sign_flip*high_synthesis_taps[n];
  for (sign_flip=1.0F, n=0; n <= high_analysis_L; n++, sign_flip=-sign_flip)
    high_analysis_taps[n] = high_analysis_taps[-n] =
      sign_flip*low_synthesis_taps[n];

  // Deduce scaling factors and normalize filter taps.
  if (reversible)
    { low_scale = high_scale = 1.0F; return; }
  float gain;
  gain=low_analysis_taps[0];
  for (n=1; n <= low_analysis_L; n++)
    gain += 2*low_analysis_taps[n];
  low_scale = 1.0F / gain;
  for (n=-low_analysis_L; n <= low_analysis_L; n++)
    low_analysis_taps[n] *= low_scale;
  for (n=-low_synthesis_L; n <= low_synthesis_L; n++)
    low_synthesis_taps[n] *= gain;
  gain = high_analysis_taps[0];
  for (sign_flip=-1.0F, n=1; n <= high_analysis_L; n++, sign_flip=-sign_flip)
    gain += 2*sign_flip*high_analysis_taps[n];
  high_scale = 1.0F / gain;
  for (n=-high_analysis_L; n <= high_analysis_L; n++)
    high_analysis_taps[n] *= high_scale;
  for (n=-high_synthesis_L; n <= high_synthesis_L; n++)
    high_synthesis_taps[n] *= gain;
}

/*****************************************************************************/
/*                          kdu_kernels::~kdu_kernels                        */
/*****************************************************************************/

kdu_kernels::~kdu_kernels()
{
  if (downshifts != NULL)
    delete[] downshifts;
  delete[] lifting_factors;
  delete[] (low_analysis_taps-low_analysis_L);
  delete[] (high_analysis_taps-high_analysis_L);
  delete[] (low_synthesis_taps-low_synthesis_L);
  delete[] (high_synthesis_taps-high_synthesis_L);
  delete[] (work1-work_L);
  delete[] (work2-work_L);
  delete[] bibo_step_gains;
}

/*****************************************************************************/
/*                      kdu_kernels::get_lifting_factors                     */
/*****************************************************************************/

float *
  kdu_kernels::get_lifting_factors(int &num_steps,
                                   float &low_scale, float &high_scale)
{
  num_steps = this->num_steps;
  low_scale = this->low_scale;
  high_scale = this->high_scale;
  return lifting_factors;
}

/*****************************************************************************/
/*                     kdu_kernels::get_impulse_response                     */
/*****************************************************************************/

float *
  kdu_kernels::get_impulse_response(kdu_kernel_type which, int &half_length)
{
  switch (which) {
    case KDU_ANALYSIS_LOW:
      half_length = low_analysis_L;
      return low_analysis_taps;
    case KDU_ANALYSIS_HIGH:
      half_length = high_analysis_L;
      return high_analysis_taps;
    case KDU_SYNTHESIS_LOW:
      half_length = low_synthesis_L;
      return low_synthesis_taps;
    case KDU_SYNTHESIS_HIGH:
      half_length = high_synthesis_L;
      return high_synthesis_taps;
    default:
      assert(0);
    }
  return NULL;
}

/*****************************************************************************/
/*                        kdu_kernels::get_energy_gain                       */
/*****************************************************************************/

double
  kdu_kernels::get_energy_gain(kdu_kernel_type which, int level_idx)
{
  if (level_idx == 0)
    return (which==KDU_SYNTHESIS_LOW)?1.0:0.0;
  int extra_levels = level_idx - max_expansion_levels;
  if (extra_levels < 0)
    extra_levels = 0;
  else
    level_idx -= extra_levels;
  int L, n, k;
  if (which == KDU_SYNTHESIS_LOW)
    {
      L = low_synthesis_L;
      for (n=-L; n <= L; n++)
        work1[n] = low_synthesis_taps[n];
    }
  else if (which == KDU_SYNTHESIS_HIGH)
    {
      L = high_synthesis_L;
      for (n=-L; n <= L; n++)
        work1[n] = high_synthesis_taps[n];
    }
  else
    assert(0); // Function only computes synthesis energy gains.
  for (level_idx--; level_idx > 0; level_idx--)
    {
      float *tbuf=work1; work1=work2; work2=tbuf;
      int new_L = 2*L + low_synthesis_L;
      assert(new_L <= work_L);
      for (n=-new_L; n <= new_L; n++)
        work1[n] = 0.0F;
      for (n=-L; n <= L; n++)
        for (k=-low_synthesis_L; k <= low_synthesis_L; k++)
          work1[2*n+k] += work2[n]*low_synthesis_taps[k];
      L = new_L;
    }
  double val, energy = 0.0;
  for (n=-L; n <= L; n++)
    {
      val = work1[n];
      energy += val*val;
    }
  while (extra_levels--)
    energy *= 2.0;
  return energy;
}

/*****************************************************************************/
/*                         kdu_kernels::get_bibo_gains                       */
/*****************************************************************************/

double *
  kdu_kernels::get_bibo_gains(int level_idx,
                              double &low_gain, double &high_gain)
{
  if (level_idx == 0)
    {
      low_gain = 1.0;
      high_gain = 0.0;
      return NULL;
    }
  if (level_idx > max_expansion_levels)
    level_idx = max_expansion_levels;
  float *work_low=work1, *work_high=work2;
  // In the sequel, `work_low' will hold the analysis kernel used to compute
  // the even sub-sequence entry at location 0, while `work_high' will hold
  // the analysis kernels used to compute the odd sub-sequence entry at
  // location 1.  The lifting procedure is followed to alternately update
  // these analysis kernels.
  int k, lev, low_L, high_L, gap;

  // Initialize analysis vectors and gains for a 1 level lazy wavelet
  for (k=-work_L; k <= work_L; k++)
    work_low[k] = work_high[k] = 0.0F;
  work_low[0] = 1.0F;
  low_L = high_L = 0;
  low_gain = high_gain = 1.0;
  for (gap=1, lev=1; lev <= level_idx; lev++, gap<<=1)
    { // Work through the levels
      /* Copy the low analysis vector from the last level to the high analysis
         vector for the current level. */
      for (k=0; k <= low_L; k++)
        work_high[k] = work_high[-k] = work_low[k];
      for (; k <= high_L; k++)
        work_high[k] = work_high[-k] = 0.0F;
      high_L = low_L;
      high_gain = low_gain;
      for (int step=0; step < num_steps; step+=2)
        { // Work through the lifting steps in this level
          float factor;

          // Start by updating the odd sub-sequence analysis kernel
          factor = lifting_factors[step];
          assert((low_L+gap) <= work_L);
          for (k=-low_L; k <= low_L; k++)
            {
              work_high[k-gap] += work_low[k]*factor;
              work_high[k+gap] += work_low[k]*factor;
            }
          high_L = ((low_L+gap) > high_L)?(low_L+gap):high_L;
          for (high_gain=0.0, k=-high_L; k <= high_L; k++)
            high_gain += fabs(work_high[k]);
          bibo_step_gains[step] = high_gain;

          // Now update the even sub-sequence analysis kernel
          if ((step+1) < num_steps)
            {
              factor = lifting_factors[step+1];
              assert((high_L+gap) <= work_L);
              for (k=-high_L; k <= high_L; k++)
                {
                  work_low[k-gap] += work_high[k]*factor;
                  work_low[k+gap] += work_high[k]*factor;
                }
              low_L = ((high_L+gap) > low_L)?(high_L+gap):low_L;
              for (low_gain=0.0, k=-low_L; k <= low_L; k++)
                low_gain += fabs(work_low[k]);
              bibo_step_gains[step+1] = low_gain;
            }
        }

      // Now incorporate the subband scaling factors
      for (k=-high_L; k <= high_L; k++)
        work_high[k] *= high_scale;
      high_gain *= high_scale;
      for (k=-low_L; k <= low_L; k++)
        work_low[k] *= low_scale;
      low_gain *= low_scale;
    }
  return bibo_step_gains;
}
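The listing above only derives and exposes kernel properties; the DWT engine proper lives elsewhere in the core system. As a rough, hypothetical sketch of how this interface might be exercised (it is not part of the Kakadu distribution, and it assumes the Kakadu 2.2 headers named above are on the include path), the following program builds the irreversible W9X7 kernel set and prints its lifting factors, its low-pass synthesis impulse response, and the per-level synthesis energy gains:

// Hypothetical usage sketch -- not part of kernels.cpp; it relies only on the
// constructor and query functions defined in the listing above.
#include <stdio.h>
#include "kdu_kernels.h"

int main()
{
  kdu_kernels kernels(Ckernels_W9X7, false); // irreversible 9/7 kernels

  int num_steps; float low_scale, high_scale;
  float *factors =
    kernels.get_lifting_factors(num_steps, low_scale, high_scale);
  for (int s = 0; s < num_steps; s++)
    printf("Lifting factor %d = %f\n", s, factors[s]);
  printf("Subband scales: low = %f, high = %f\n", low_scale, high_scale);

  // Impulse responses are returned as a pointer to the centre tap, so the
  // legal indices run from -half_length to +half_length.
  int half_length;
  float *taps = kernels.get_impulse_response(KDU_SYNTHESIS_LOW, half_length);
  for (int n = -half_length; n <= half_length; n++)
    printf("h_low[%d] = %f\n", n, taps[n]);

  // Per-level synthesis energy gains; levels beyond max_expansion_levels are
  // approximated by doubling, exactly as get_energy_gain() does internally.
  for (int lev = 1; lev <= 5; lev++)
    printf("Level %d: low gain = %f, high gain = %f\n", lev,
           kernels.get_energy_gain(KDU_SYNTHESIS_LOW, lev),
           kernels.get_energy_gain(KDU_SYNTHESIS_HIGH, lev));
  return 0;
}

The energy gains reported by get_energy_gain() are the quantities typically used to weight subband distortion when allocating rate across resolution levels; note that beyond max_expansion_levels (4 in this implementation) the member function simply doubles the gain for each extra level rather than expanding the synthesis vectors further.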
