

ClusterTestDFS.java

Hadoop is a framework for running applications on large clusters of inexpensive commodity hardware. It transparently provides applications with a set of stable, reliable interfaces and handles data movement. Hadoop implements Google's MapReduce algorithm.
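As a quick orientation for the MapReduce model mentioned above, here is a minimal word-count sketch. Note it is written against the later, stable org.apache.hadoop.mapreduce API rather than the 2005-era API used in the listing below, and the input/output paths come from the command line; it is an illustrative aid, not part of the file on this page.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner reuses the reducer
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}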
Language: Java
Page 1 of 2
/**
 * Copyright 2005 The Apache Software Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.dfs;

import junit.framework.TestCase;
import junit.framework.AssertionFailedError;

import org.apache.hadoop.fs.FSInputStream;
import org.apache.hadoop.fs.FSOutputStream;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.io.UTF8;
import org.apache.hadoop.util.LogFormatter;
import org.apache.hadoop.conf.Configuration;

import java.io.File;
import java.io.FilenameFilter;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.ListIterator;
import java.util.logging.Logger;
import java.util.Random;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

/**
 * Test DFS.
 * ClusterTestDFS is a JUnit test for DFS using "pseudo multiprocessing" (or,
 * more strictly, pseudo distributed), meaning all daemons run in one process
 * and sockets are used to communicate between daemons.  The test permutes
 * various block sizes, number of files, file sizes, and number of
 * datanodes.  After creating 1 or more files and filling them with random
 * data, one datanode is shut down, and then the files are verified.
 * Next, all the random test files are deleted and we test for leakage
 * (non-deletion) by directly checking the real directories corresponding
 * to the datanodes still running.
 * <p>
 * Usage notes: TEST_PERMUTATION_MAX can be adjusted to perform more or
 * less testing of permutations.  The ceiling of useful permutations is
 * TEST_PERMUTATION_MAX_CEILING.
 * <p>
 * DFSClient emits many messages that can be ignored, like:
 * "Failed to connect to *:7000:java.net.ConnectException: Connection refused: connect"
 * because a datanode is forced to close during testing.
 * <p>
 * Warnings about "Zero targets found" can be ignored (these are naggingly
 * emitted even when it is not possible to achieve the desired replication
 * level with the number of active datanodes).
 * <p>
 * Possible Extensions:
 * <p>Bring a datanode down and restart it to verify reconnection to namenode.
 * <p>Simulate running out of disk space on one datanode only.
 * <p>Bring the namenode down and restart it to verify that datanodes reconnect.
 * <p>
 * <p>For another approach to filesystem testing, see the high level
 * (HadoopFS level) test {@link org.apache.hadoop.fs.TestFileSystem}.
 * @author Paul Baclace
 */
public class ClusterTestDFS extends TestCase implements FSConstants {
  private static final Logger LOG =
      LogFormatter.getLogger("org.apache.hadoop.dfs.ClusterTestDFS");

  private static Configuration conf = new Configuration();
  private static int BUFFER_SIZE =
      conf.getInt("io.file.buffer.size", 4096);

  private static int testCycleNumber = 0;

  /**
   * all DFS test files go under this base directory
   */
  private static String baseDirSpecified;

  /**
   * base dir as File
   */
  private static File baseDir;

  /** DFS block sizes to permute over in multiple test cycles
   * (array length should be prime, so that indexing each array by
   * testCycleNumber modulo its length sweeps distinct combinations
   * across cycles).
   */
  private static final int[] BLOCK_SIZES = {100000, 4096};

  /** DFS file sizes to permute over in multiple test cycles
   * (array length should be prime).
   */
  private static final int[] FILE_SIZES =
      {100000, 100001, 4095, 4096, 4097, 1000000, 1000001};

  /** DFS file counts to permute over in multiple test cycles
   * (array length should be prime).
   */
  private static final int[] FILE_COUNTS = {1, 10, 100};

  /** Number of useful permutations or test cycles.
   * (The 2 factor represents the alternating 2 or 3 number of datanodes
   * started.)  With the arrays above this is 2 * 7 * 3 * 2 = 84.
   */
  private static final int TEST_PERMUTATION_MAX_CEILING =
      BLOCK_SIZES.length * FILE_SIZES.length * FILE_COUNTS.length * 2;

  /** Number of permutations of DFS test parameters to perform.
   * If this is greater than ceiling TEST_PERMUTATION_MAX_CEILING, then the
   * ceiling value is used.
   */
  private static final int TEST_PERMUTATION_MAX = 3;

  private Constructor randomDataGeneratorCtor = null;

  static {
    baseDirSpecified = System.getProperty("test.dfs.data", "/tmp/dfs_test");
    baseDir = new File(baseDirSpecified);
  }

  protected void setUp() throws Exception {
    super.setUp();
    conf.setBoolean("test.dfs.same.host.targets.allowed", true);
  }

  /**
   * Remove old files from temp area used by this test case and be sure
   * base temp directory can be created.
   */
  protected void prepareTempFileSpace() {
    if (baseDir.exists()) {
      try { // start from a blank slate
        FileUtil.fullyDelete(baseDir, conf);
      } catch (Exception ignored) {
      }
    }
    baseDir.mkdirs();
    if (!baseDir.isDirectory()) {
      throw new RuntimeException("Value of root directory property test.dfs.data for dfs test is not a directory: "
          + baseDirSpecified);
    }
  }

  /**
   * Pseudo Distributed FS Test.
   * Test DFS by running all the necessary daemons in one process.
   * Test various block sizes, number of files, disk space consumption,
   * and leakage.
   *
   * @throws Exception
   */
  public void testFsPseudoDistributed()
      throws Exception {
    while (testCycleNumber < TEST_PERMUTATION_MAX &&
        testCycleNumber < TEST_PERMUTATION_MAX_CEILING) {
      int blockSize = BLOCK_SIZES[testCycleNumber % BLOCK_SIZES.length];
      int numFiles = FILE_COUNTS[testCycleNumber % FILE_COUNTS.length];
      int fileSize = FILE_SIZES[testCycleNumber % FILE_SIZES.length];
      prepareTempFileSpace();
      testFsPseudoDistributed(fileSize, numFiles, blockSize,
          (testCycleNumber % 2) + 2);
    }
  }

  /**
   * Pseudo Distributed FS Testing.
   * Do one test cycle with given parameters.
   *
   * @param nBytes         number of bytes to write to each file.
   * @param numFiles       number of files to create.
   * @param blockSize      block size to use for this test cycle.
   * @param initialDNcount number of datanodes to create
   * @throws Exception
   */
  public void testFsPseudoDistributed(long nBytes, int numFiles,
                                      int blockSize, int initialDNcount)
      throws Exception {
    long startTime = System.currentTimeMillis();
    int bufferSize = Math.min(BUFFER_SIZE, blockSize);
    boolean checkDataDirsEmpty = false;
    int iDatanodeClosed = 0;
    Random randomDataGenerator = makeRandomDataGenerator();
    final int currentTestCycleNumber = testCycleNumber;
    msg("using randomDataGenerator=" + randomDataGenerator.getClass().getName());

    //
    //     modify config for test
    //

    // set given config param to override other config settings
    conf.setInt("test.dfs.block_size", blockSize);
    // verify that config changed
    assertTrue(blockSize == conf.getInt("test.dfs.block_size", 2)); // 2 is an intentional obviously-wrong block size
    // downsize for testing (just to save resources)
    conf.setInt("dfs.namenode.handler.count", 3);
    if (false) { //  use MersenneTwister, if present
      conf.set("hadoop.random.class",
               "org.apache.hadoop.util.MersenneTwister");
    }
    conf.setLong("dfs.blockreport.intervalMsec", 50*1000L);
    conf.setLong("dfs.datanode.startupMsec", 15*1000L);

    String nameFSDir = baseDirSpecified + "/name";
    msg("----Start Test Cycle=" + currentTestCycleNumber +
        " test.dfs.block_size=" + blockSize +
        " nBytes=" + nBytes +
        " numFiles=" + numFiles +
        " initialDNcount=" + initialDNcount);

    //
    //          start a NameNode
    int nameNodePort = 9000 + testCycleNumber++; // ToDo: settable base port
    String nameNodeSocketAddr = "localhost:" + nameNodePort;
    NameNode nameNodeDaemon = new NameNode(new File(nameFSDir), nameNodePort, conf);
    DFSClient dfsClient = null;
    try {
      //
      //        start some DataNodes
      //
      ArrayList listOfDataNodeDaemons = new ArrayList();
      conf.set("fs.default.name", nameNodeSocketAddr);
      for (int i = 0; i < initialDNcount; i++) {
        // uniquely config real fs path for data storage for this datanode
        String dataDir = baseDirSpecified + "/datanode" + i;
        conf.set("dfs.data.dir", dataDir);
        DataNode dn = DataNode.makeInstanceForDir(dataDir, conf);
        if (dn != null) {
          listOfDataNodeDaemons.add(dn);
          (new Thread(dn, "DataNode" + i + ": " + dataDir)).start();
        }
      }
      try {
        assertTrue("insufficient datanodes for test to continue",
            (listOfDataNodeDaemons.size() >= 2));

        //
        //          wait for datanodes to report in
        awaitQuiescence();

        //  act as if namenode is a remote process
        dfsClient = new DFSClient(new InetSocketAddress("localhost", nameNodePort), conf);

        //
        //           write nBytes of data using randomDataGenerator to numFiles
        //
        ArrayList testfilesList = new ArrayList();
        byte[] buffer = new byte[bufferSize];
        UTF8 testFileName = null;
        for (int iFileNumber = 0; iFileNumber < numFiles; iFileNumber++) {
          testFileName = new UTF8("/f" + iFileNumber);
          testfilesList.add(testFileName);
          FSOutputStream nos = dfsClient.create(testFileName, false);
          try {
            for (long nBytesWritten = 0L;
                 nBytesWritten < nBytes;
                 nBytesWritten += buffer.length) {
              if ((nBytesWritten + buffer.length) > nBytes) {
                // calculate byte count needed to exactly hit nBytes in length
                //  to keep randomDataGenerator in sync during the verify step
                int pb = (int) (nBytes - nBytesWritten);
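The listing breaks off here at the page boundary; the remainder of testFsPseudoDistributed and the helpers it calls (makeRandomDataGenerator, msg, awaitQuiescence) are presumably on page 2. As a usage note: because the class extends junit.framework.TestCase, a JUnit 3.x text runner can drive it standalone. A minimal sketch, assuming JUnit 3.8 on the classpath (the RunClusterTestDFS launcher class name is hypothetical):

import junit.textui.TestRunner;

public class RunClusterTestDFS {
  public static void main(String[] args) {
    // The test reads its base data directory from the test.dfs.data
    // system property, defaulting to /tmp/dfs_test (see the static
    // initializer in the listing above).
    System.setProperty("test.dfs.data", "/tmp/dfs_test");
    TestRunner.run(org.apache.hadoop.dfs.ClusterTestDFS.class);
  }
}

Equivalently, JUnit 3's runner can be invoked directly from the command line: java -Dtest.dfs.data=/tmp/dfs_test junit.textui.TestRunner org.apache.hadoop.dfs.ClusterTestDFS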
