MapReduce Programming: Building an Inverted Index

2023-05-16

    An inverted index is a variant of word-frequency counting: at heart it is still a word count, except that each count is tagged with the name of the file the word came from. Inverted indexes are widely used for full-text search. The final result maps each word to the set of files it appears in, together with the number of occurrences in each file. Take the following data as an example:

file1.txt

hdfs hadoop mapreduce
hdfs bigdata
hadoop mapreduce

file2.txt

mapreduce hdfs
hadoop bigdata mapreduce
hdfs hadoop
hdfs mapreduce
bigdata hadoop

file3.txt

bigdata hadoop mapreduce hdfs
hadoop hdfs mapreduce
bigdata hadoop

The final result:

bigdata	file3.txt:2;file2.txt:2;file1.txt:1;
hadoop	file1.txt:2;file3.txt:3;file2.txt:3;
hdfs	file2.txt:3;file1.txt:2;file3.txt:2;
mapreduce	file3.txt:2;file1.txt:2;file2.txt:3;

The design is similar to word counting, but not identical: here the file a word appears in must enter the statistics as well. To achieve this, the map stage works just like an ordinary word count, except that the emitted key is not the bare word but word + ":" + file, like this:

<hdfs:file1.txt , 1>
<hdfs:file2.txt , 1>
<hdfs:file3.txt , 1>
<hadoop:file1.txt , 1>
<mapreduce:file3.txt , 1>

Before the reduce stage we add a combine step. Its input looks like <word:file , <counts>>:

<hdfs:file1.txt , <1,1>>
<hadoop:file1.txt , <1,1>>
<hdfs:file2.txt , <1,1,1>>
<hdfs:file3.txt , <1,1>>

Here we make a change: the combiner sums the counts and rewrites each pair as <word , file:sum>:

<hdfs , file1.txt:2>
<hdfs , file2.txt:3>
<hdfs , file3.txt:2>

In the reduce stage we simply concatenate the values for each word. To summarize the whole pipeline: map emits <word:file , 1>; the combiner sums the counts per (word, file) pair and rewrites them as <word , file:count>; the reducer joins all the file:count strings for each word.

The code for the inverted index follows:

package com.xxx.hadoop.mapred;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Inverted index: for every word, list the files it appears in and
 * how often it occurs in each, e.g. hdfs -> file2.txt:3;file1.txt:2;...
 */
public class InvertIndexApp {

    public static class Map extends Mapper<LongWritable, Text, Text, Text> {
        private Text keyinfo = new Text();
        private Text valueinfo = new Text();
        private FileSplit split;

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // The input split tells us which file the current line came from.
            split = (FileSplit) context.getInputSplit();
            String fileName = split.getPath().getName();
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                // Emit <word:file, 1> so counts can be summed per (word, file) pair.
                keyinfo.set(tokenizer.nextToken() + ":" + fileName);
                valueinfo.set("1");
                context.write(keyinfo, valueinfo);
            }
        }
    }

    public static class Combine extends Reducer<Text, Text, Text, Text> {
        private Text info = new Text();

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Sum the partial counts for this (word, file) pair.
            int sum = 0;
            for (Text value : values) {
                sum += Integer.parseInt(value.toString());
            }
            // Rewrite <word:file, sum> as <word, file:sum> so the reduce
            // stage groups by the word alone.
            int splitIndex = key.toString().indexOf(":");
            info.set(key.toString().substring(splitIndex + 1) + ":" + sum);
            key.set(key.toString().substring(0, splitIndex));
            context.write(key, info);
        }
    }

    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        private Text result = new Text();

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Concatenate every file:count entry for this word.
            StringBuilder filelist = new StringBuilder();
            for (Text value : values) {
                filelist.append(value.toString()).append(";");
            }
            result.set(filelist.toString());
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        String input = "/user/root/invertindex/input";
        String output = "/user/root/invertindex/output";
        System.setProperty("HADOOP_USER_NAME", "root");
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.56.202:9000");
        // Delete any previous output directory; the job fails if it already exists.
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(new Path(output))) {
            fs.delete(new Path(output), true);
        }
        Job job = Job.getInstance(conf);
        job.setJarByClass(InvertIndexApp.class);

        job.setMapperClass(Map.class);
        job.setCombinerClass(Combine.class);
        job.setReducerClass(Reduce.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(input));
        FileOutputFormat.setOutputPath(job, new Path(output));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

}

Prepare the input data before running the job:
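A minimal sketch of the upload, assuming file1.txt, file2.txt and file3.txt sit in the local working directory and an hdfs client is configured for this cluster:

hdfs dfs -mkdir -p /user/root/invertindex/input
hdfs dfs -put file1.txt file2.txt file3.txt /user/root/invertindex/input
hdfs dfs -ls /user/root/invertindex/input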

Run the program; the console prints the following:

2019-09-02 09:56:22 [INFO ]  [main]  [org.apache.hadoop.conf.Configuration.deprecation] session.id is deprecated. Instead, use dfs.metrics.session-id
2019-09-02 09:56:22 [INFO ]  [main]  [org.apache.hadoop.metrics.jvm.JvmMetrics] Initializing JVM Metrics with processName=JobTracker, sessionId=
2019-09-02 09:56:22 [WARN ]  [main]  [org.apache.hadoop.mapreduce.JobResourceUploader] Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-09-02 09:56:22 [WARN ]  [main]  [org.apache.hadoop.mapreduce.JobResourceUploader] No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2019-09-02 09:56:22 [INFO ]  [main]  [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] Total input paths to process : 3
2019-09-02 09:56:22 [INFO ]  [main]  [org.apache.hadoop.mapreduce.JobSubmitter] number of splits:3
2019-09-02 09:56:22 [INFO ]  [main]  [org.apache.hadoop.mapreduce.JobSubmitter] Submitting tokens for job: job_local1888565320_0001
2019-09-02 09:56:23 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] The url to track the job: http://localhost:8080/
2019-09-02 09:56:23 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Running job: job_local1888565320_0001
2019-09-02 09:56:23 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] OutputCommitter set in config null
2019-09-02 09:56:23 [INFO ]  [Thread-3]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-09-02 09:56:23 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2019-09-02 09:56:23 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] Waiting for map tasks
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1888565320_0001_m_000000_0
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@e65aa68
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/invertindex/input/file2.txt:0+82
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 214; bufvoid = 104857600
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1888565320_0001_m_000000_0 is done. And is in the process of committing
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1888565320_0001_m_000000_0' done.
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1888565320_0001_m_000000_0
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1888565320_0001_m_000001_0
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@4012ecfc
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/invertindex/input/file3.txt:0+67
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 175; bufvoid = 104857600
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214364(104857456); length = 33/6553600
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1888565320_0001_m_000001_0 is done. And is in the process of committing
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1888565320_0001_m_000001_0' done.
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1888565320_0001_m_000001_0
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1888565320_0001_m_000002_0
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@157add31
2019-09-02 09:56:23 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/invertindex/input/file1.txt:0+52
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 136; bufvoid = 104857600
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214372(104857488); length = 25/6553600
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1888565320_0001_m_000002_0 is done. And is in the process of committing
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1888565320_0001_m_000002_0' done.
2019-09-02 09:56:24 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1888565320_0001_m_000002_0
2019-09-02 09:56:24 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] map task executor complete.
2019-09-02 09:56:24 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Job job_local1888565320_0001 running in uber mode : false
2019-09-02 09:56:24 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job]  map 100% reduce 0%
2019-09-02 09:56:24 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] Waiting for reduce tasks
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1888565320_0001_r_000000_0
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@27512f1a
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.ReduceTask] Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@4ac60620
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] MergerManager: memoryLimit=1265788544, maxSingleShuffleLimit=316447136, mergeThreshold=835420480, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2019-09-02 09:56:24 [INFO ]  [EventFetcher for fetching Map Completion Events]  [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] attempt_local1888565320_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1888565320_0001_m_000001_0 decomp: 88 len: 92 to MEMORY
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 88 bytes from map-output for attempt_local1888565320_0001_m_000001_0
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 88, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->88
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1888565320_0001_m_000002_0 decomp: 88 len: 92 to MEMORY
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 88 bytes from map-output for attempt_local1888565320_0001_m_000002_0
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 88, inMemoryMapOutputs.size() -> 2, commitMemory -> 88, usedMemory ->176
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1888565320_0001_m_000000_0 decomp: 88 len: 92 to MEMORY
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 88 bytes from map-output for attempt_local1888565320_0001_m_000000_0
2019-09-02 09:56:24 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 88, inMemoryMapOutputs.size() -> 3, commitMemory -> 176, usedMemory ->264
2019-09-02 09:56:24 [INFO ]  [EventFetcher for fetching Map Completion Events]  [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] EventFetcher is interrupted.. Returning
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Merging 3 sorted segments
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Down to the last merge-pass, with 3 segments left of total size: 234 bytes
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merged 3 segments, 264 bytes to disk to satisfy reduce memory limit
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merging 1 files, 264 bytes from disk
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merging 0 segments, 0 bytes from memory into reduce
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Merging 1 sorted segments
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Down to the last merge-pass, with 1 segments left of total size: 250 bytes
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.conf.Configuration.deprecation] mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task:attempt_local1888565320_0001_r_000000_0 is done. And is in the process of committing
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task attempt_local1888565320_0001_r_000000_0 is allowed to commit now
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] Saved output of task 'attempt_local1888565320_0001_r_000000_0' to hdfs://192.168.56.202:9000/user/root/invertindex/output/_temporary/0/task_local1888565320_0001_r_000000
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] reduce > reduce
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1888565320_0001_r_000000_0' done.
2019-09-02 09:56:24 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1888565320_0001_r_000000_0
2019-09-02 09:56:24 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] reduce task executor complete.
2019-09-02 09:56:25 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job]  map 100% reduce 100%
2019-09-02 09:56:25 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Job job_local1888565320_0001 completed successfully
2019-09-02 09:56:25 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Counters: 35
	File System Counters
		FILE: Number of bytes read=4510
		FILE: Number of bytes written=1098080
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=633
		HDFS: Number of bytes written=178
		HDFS: Number of read operations=37
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=10
	Map-Reduce Framework
		Map input records=11
		Map output records=27
		Map output bytes=525
		Map output materialized bytes=276
		Input split bytes=387
		Combine input records=27
		Combine output records=12
		Reduce input groups=4
		Reduce shuffle bytes=276
		Reduce input records=12
		Reduce output records=4
		Spilled Records=24
		Shuffled Maps =3
		Failed Shuffles=0
		Merged Map outputs=3
		GC time elapsed (ms)=8
		Total committed heap usage (bytes)=1485307904
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=201
	File Output Format Counters 
		Bytes Written=178

Check the job output:
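One way to inspect the result, assuming the default single reduce task wrote its output to part-r-00000:

hdfs dfs -cat /user/root/invertindex/output/part-r-00000

This should print the four index lines shown in the expected result at the top of this article.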

    What distinguishes this from a plain MapReduce program is that we explicitly specify a combine step between map and reduce, and inside it we rewrite the <key, value> pairs so that the reduce stage becomes trivial. Ordinary MapReduce programs can use a combiner as well; it just tends to be overlooked, because the classic examples simply reuse the reducer class as the combiner. One caveat worth knowing: Hadoop may run a combiner zero, one, or several times, so a combiner that rewrites keys, as ours does, is fine for a demo but not strictly safe in general.
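For comparison, the driver of Hadoop's bundled WordCount example wires the same reducer class into both slots; summing partial counts is associative and the key/value types stay unchanged, so reusing the reducer there is safe:

// Fragment from the stock org.apache.hadoop.examples.WordCount driver:
// the reducer doubles as the combiner.
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);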
