kernels.cpp

This is C++ source code from the JPEG2000 reference implementation; it should be useful to anyone studying the JPEG2000 standard and its encoders/decoders.
/*****************************************************************************/
// File: kernels.cpp [scope = CORESYS/DWT-KERNELS]
// Version: Kakadu, V2.2
// Author: David Taubman
// Last Revised: 20 June, 2001
/*****************************************************************************/
// Copyright 2001, David Taubman, The University of New South Wales (UNSW)
// The copyright owner is Unisearch Ltd, Australia (commercial arm of UNSW)
// Neither this copyright statement, nor the licensing details below
// may be removed from this file or dissociated from its contents.
/*****************************************************************************/
// Licensee: Book Owner
// License number: 99999
// The Licensee has been granted a NON-COMMERCIAL license to the contents of
// this source file, said Licensee being the owner of a copy of the book,
// "JPEG2000: Image Compression Fundamentals, Standards and Practice," by
// Taubman and Marcellin (Kluwer Academic Publishers, 2001).  A brief summary
// of the license appears below.  This summary is not to be relied upon in
// preference to the full text of the license agreement, which was accepted
// upon breaking the seal of the compact disc accompanying the above-mentioned
// book.
// 1. The Licensee has the right to Non-Commercial Use of the Kakadu software,
//    Version 2.2, including distribution of one or more Applications built
//    using the software, provided such distribution is not for financial
//    return.
// 2. The Licensee has the right to personal use of the Kakadu software,
//    Version 2.2.
// 3. The Licensee has the right to distribute Reusable Code (including
//    source code and dynamically or statically linked libraries) to a Third
//    Party, provided the Third Party possesses a license to use the Kakadu
//    software, Version 2.2, and provided such distribution is not for
//    financial return.
/******************************************************************************
Description:
   Implements the services defined by "kdu_kernels.h"
******************************************************************************/

#include <assert.h>
#include <math.h>
#include "kdu_elementary.h"
#include "kdu_messaging.h"
#include "kdu_params.h"
#include "kdu_kernels.h"

/* ========================================================================= */
/*                                kdu_kernels                                */
/* ========================================================================= */

/*****************************************************************************/
/*                          kdu_kernels::kdu_kernels                         */
/*****************************************************************************/

kdu_kernels::kdu_kernels(int kernel_id, bool reversible)
{
  this->kernel_id = kernel_id;
  this->downshifts = NULL;
  if (kernel_id == Ckernels_W5X3)
    {
      num_steps = 2;
      lifting_factors = new float[num_steps];
      lifting_factors[0] = -0.5F;
      lifting_factors[1] = 0.25F;
      if (reversible)
        {
          downshifts = new int[num_steps];
          downshifts[0] = 1;
          downshifts[1] = 2;
        }
    }
  else if (kernel_id == Ckernels_W9X7)
    {
      num_steps = 4;
      lifting_factors = new float[num_steps];
      if (reversible)
        { kdu_error e; e << "The W9X7 kernel may not be used for reversible "
          "compression!"; }
      lifting_factors[0] = (float) -1.586134342;
      lifting_factors[1] = (float) -0.052980118;
      lifting_factors[2] = (float)  0.882911075;
      lifting_factors[3] = (float)  0.443506852;
    }
  else
    { kdu_error e; e << "Illegal DWT kernel ID used to construct a "
      "`kdu_kernels' object."; }

  // Now let's derive all the remaining quantities.
  low_analysis_L   = num_steps;   // These lengths may be pessimistic
  high_analysis_L  = num_steps-1; // if one or more of the lifting factors
  low_synthesis_L  = num_steps-1; // is equal to 0.
  high_synthesis_L = num_steps;
  low_analysis_taps = (new float[2*low_analysis_L+1])+low_analysis_L;
  high_analysis_taps = (new float[2*high_analysis_L+1])+high_analysis_L;
  low_synthesis_taps = (new float[2*low_synthesis_L+1])+low_synthesis_L;
  high_synthesis_taps = (new float[2*high_synthesis_L+1])+high_synthesis_L;

  // Initialize the vector expansion buffers.
  max_expansion_levels = 4;
  work_L = num_steps+1; // Allow for placing the input impulse at n=1 or n=0.
  for (int d=1; d < max_expansion_levels; d++)
    work_L = work_L*2 + num_steps;
  work1 = (new float[2*work_L+1]) + work_L;
  work2 = (new float[2*work_L+1]) + work_L;
  bibo_step_gains = new double[num_steps];

  // Deduce synthesis impulse responses, without scaling factors.
  int n, k;
  for (n=0; n <= work_L; n++)
    work1[n] = work1[-n] = 0.0F;
  work1[0]=1.0F; // Simulate an impulse in the low-pass subband.
  for (k=num_steps-1; k >= 0; k--)
    {
      if (k&1)
        n = -(num_steps & (~1)); // Smallest even integer >= -num_steps.
      else
        n = ((-num_steps) & (~1)) + 1; // Smallest odd integer >= -num_steps.
      for (; n <= num_steps; n+=2)
        work1[n] -= lifting_factors[k]*(work1[n-1]+work1[n+1]);
    }
  for (n=0; n <= low_synthesis_L; n++)
    low_synthesis_taps[n] = low_synthesis_taps[-n] = work1[n];

  for (n=0; n <= work_L; n++)
    work1[n] = work1[-n] = 0.0F;
  work1[1]=1.0F; // Simulate an impulse in the high-pass subband.
  for (k=num_steps-1; k >= 0; k--)
    {
      if (k&1)
        n = -(num_steps & (~1)); // Smallest even integer >= -num_steps.
      else
        n = ((-num_steps) & (~1)) + 1; // Smallest odd integer >= -num_steps.
      for (; n <= (num_steps+1); n+=2)
        work1[n] -= lifting_factors[k]*(work1[n-1]+work1[n+1]);
    }
  for (n=0; n <= high_synthesis_L; n++)
    high_synthesis_taps[n] = high_synthesis_taps[-n] = work1[n+1];

  // Deduce analysis kernels from synthesis kernels.
  float sign_flip;
  for (sign_flip=1.0F, n=0; n <= low_analysis_L; n++, sign_flip=-sign_flip)
    low_analysis_taps[n] = low_analysis_taps[-n] =
      sign_flip*high_synthesis_taps[n];
  for (sign_flip=1.0F, n=0; n <= high_analysis_L; n++, sign_flip=-sign_flip)
    high_analysis_taps[n] = high_analysis_taps[-n] =
      sign_flip*low_synthesis_taps[n];

  // Deduce scaling factors and normalize filter taps.
  if (reversible)
    { low_scale = high_scale = 1.0F; return; }
  float gain;
  gain = low_analysis_taps[0];
  for (n=1; n <= low_analysis_L; n++)
    gain += 2*low_analysis_taps[n];
  low_scale = 1.0F / gain;
  for (n=-low_analysis_L; n <= low_analysis_L; n++)
    low_analysis_taps[n] *= low_scale;
  for (n=-low_synthesis_L; n <= low_synthesis_L; n++)
    low_synthesis_taps[n] *= gain;
  gain = high_analysis_taps[0];
  for (sign_flip=-1.0F, n=1; n <= high_analysis_L; n++, sign_flip=-sign_flip)
    gain += 2*sign_flip*high_analysis_taps[n];
  high_scale = 1.0F / gain;
  for (n=-high_analysis_L; n <= high_analysis_L; n++)
    high_analysis_taps[n] *= high_scale;
  for (n=-high_synthesis_L; n <= high_synthesis_L; n++)
    high_synthesis_taps[n] *= gain;
}

/*****************************************************************************/
/*                          kdu_kernels::~kdu_kernels                        */
/*****************************************************************************/

kdu_kernels::~kdu_kernels()
{
  if (downshifts != NULL)
    delete[] downshifts;
  delete[] lifting_factors;
  delete[] (low_analysis_taps-low_analysis_L);
  delete[] (high_analysis_taps-high_analysis_L);
  delete[] (low_synthesis_taps-low_synthesis_L);
  delete[] (high_synthesis_taps-high_synthesis_L);
  delete[] (work1-work_L);
  delete[] (work2-work_L);
  delete[] bibo_step_gains;
}

/*****************************************************************************/
/*                      kdu_kernels::get_lifting_factors                     */
/*****************************************************************************/

float *
  kdu_kernels::get_lifting_factors(int &num_steps,
                                   float &low_scale, float &high_scale)
{
  num_steps = this->num_steps;
  low_scale = this->low_scale;
  high_scale = this->high_scale;
  return lifting_factors;
}

/*****************************************************************************/
/*                     kdu_kernels::get_impulse_response                     */
/*****************************************************************************/

float *
  kdu_kernels::get_impulse_response(kdu_kernel_type which, int &half_length)
{
  switch (which) {
    case KDU_ANALYSIS_LOW:
      half_length = low_analysis_L;
      return low_analysis_taps;
    case KDU_ANALYSIS_HIGH:
      half_length = high_analysis_L;
      return high_analysis_taps;
    case KDU_SYNTHESIS_LOW:
      half_length = low_synthesis_L;
      return low_synthesis_taps;
    case KDU_SYNTHESIS_HIGH:
      half_length = high_synthesis_L;
      return high_synthesis_taps;
    default:
      assert(0);
    }
  return NULL;
}

/*****************************************************************************/
/*                        kdu_kernels::get_energy_gain                       */
/*****************************************************************************/

double
  kdu_kernels::get_energy_gain(kdu_kernel_type which, int level_idx)
{
  if (level_idx == 0)
    return (which==KDU_SYNTHESIS_LOW)?1.0:0.0;
  int extra_levels = level_idx - max_expansion_levels;
  if (extra_levels < 0)
    extra_levels = 0;
  else
    level_idx -= extra_levels;
  int L, n, k;
  if (which == KDU_SYNTHESIS_LOW)
    {
      L = low_synthesis_L;
      for (n=-L; n <= L; n++)
        work1[n] = low_synthesis_taps[n];
    }
  else if (which == KDU_SYNTHESIS_HIGH)
    {
      L = high_synthesis_L;
      for (n=-L; n <= L; n++)
        work1[n] = high_synthesis_taps[n];
    }
  else
    assert(0); // Function only computes synthesis energy gains.
  for (level_idx--; level_idx > 0; level_idx--)
    {
      float *tbuf=work1; work1=work2; work2=tbuf;
      int new_L = 2*L + low_synthesis_L;
      assert(new_L <= work_L);
      for (n=-new_L; n <= new_L; n++)
        work1[n] = 0.0F;
      for (n=-L; n <= L; n++)
        for (k=-low_synthesis_L; k <= low_synthesis_L; k++)
          work1[2*n+k] += work2[n]*low_synthesis_taps[k];
      L = new_L;
    }
  double val, energy = 0.0;
  for (n=-L; n <= L; n++)
    {
      val = work1[n];
      energy += val*val;
    }
  while (extra_levels--)
    energy *= 2.0;
  return energy;
}

/*****************************************************************************/
/*                         kdu_kernels::get_bibo_gains                       */
/*****************************************************************************/

double *
  kdu_kernels::get_bibo_gains(int level_idx,
                              double &low_gain, double &high_gain)
{
  if (level_idx == 0)
    {
      low_gain = 1.0;
      high_gain = 0.0;
      return NULL;
    }
  if (level_idx > max_expansion_levels)
    level_idx = max_expansion_levels;
  float *work_low=work1, *work_high=work2;
  // In the sequel, `work_low' will hold the analysis kernel used to compute
  // the even sub-sequence entry at location 0, while `work_high' will hold the
  // analysis kernels used to compute the odd sub-sequence entry at location
  // 1.  The lifting procedure is followed to alternately update these
  // analysis kernels.
  int k, lev, low_L, high_L, gap;

  // Initialize analysis vectors and gains for a 1 level lazy wavelet
  for (k=-work_L; k <= work_L; k++)
    work_low[k] = work_high[k] = 0.0F;
  work_low[0] = 1.0F;
  low_L = high_L = 0;
  low_gain = high_gain = 1.0;
  for (gap=1, lev=1; lev <= level_idx; lev++, gap<<=1)
    { // Work through the levels
      /* Copy the low analysis vector from the last level to the high analysis
         vector for the current level. */
      for (k=0; k <= low_L; k++)
        work_high[k] = work_high[-k] = work_low[k];
      for (; k <= high_L; k++)
        work_high[k] = work_high[-k] = 0.0F;
      high_L = low_L;
      high_gain = low_gain;
      for (int step=0; step < num_steps; step+=2)
        { // Work through the lifting steps in this level
          float factor;

          // Start by updating the odd sub-sequence analysis kernel
          factor = lifting_factors[step];
          assert((low_L+gap) <= work_L);
          for (k=-low_L; k <= low_L; k++)
            {
              work_high[k-gap] += work_low[k]*factor;
              work_high[k+gap] += work_low[k]*factor;
            }
          high_L = ((low_L+gap) > high_L)?(low_L+gap):high_L;
          for (high_gain=0.0, k=-high_L; k <= high_L; k++)
            high_gain += fabs(work_high[k]);
          bibo_step_gains[step] = high_gain;

          // Now update the even sub-sequence analysis kernel
          if ((step+1) < num_steps)
            {
              factor = lifting_factors[step+1];
              assert((high_L+gap) <= work_L);
              for (k=-high_L; k <= high_L; k++)
                {
                  work_low[k-gap] += work_high[k]*factor;
                  work_low[k+gap] += work_high[k]*factor;
                }
              low_L = ((high_L+gap) > low_L)?(high_L+gap):low_L;
              for (low_gain=0.0, k=-low_L; k <= low_L; k++)
                low_gain += fabs(work_low[k]);
              bibo_step_gains[step+1] = low_gain;
            }
        }

      // Now incorporate the subband scaling factors
      for (k=-high_L; k <= high_L; k++)
        work_high[k] *= high_scale;
      high_gain *= high_scale;
      for (k=-low_L; k <= low_L; k++)
        work_low[k] *= low_scale;
      low_gain *= low_scale;
    }
  return bibo_step_gains;
}
