MapReduce Programming: Data Deduplication

2023-05-16

    MapReduce is a computation framework built on the divide-and-conquer idea: "divide" means scattering the data into small pieces that can be computed independently, and "conquer" means merging the results, gathering all values that share the same key into one group. MapReduce cannot solve every problem; its data model is the key-value pair, so it fits only a certain range of problems.

    The deduplication algorithm is really a variant of word counting. Word counting tallies how often each word appears in a text; a record that appears twice counts as a duplicate, and removing duplicates just means keeping a single copy. In key-value terms, where word count emits (word, 1) and sums the values, deduplication emits (record, empty) and keeps only the key. Deduplication is in fact the kind of work MapReduce is best suited to: in the reduce phase every key is unique, which is exactly what deduplication requires.

    In the map phase, we emit the input value as the key and an empty Text (new Text()) as the value. That alone walks through all of the data once; no complicated algorithm is involved here.

    In the shuffle phase, records with identical keys are merged together: the value-list becomes a collection of empty Text objects, and if a record is duplicated the collection's length is greater than 1. The keys are also sorted by the framework's default ordering.

    In the reduce phase, the input becomes <key, value-list>, and we only need to write out each key once to achieve the deduplication. The idea is really quite simple.
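
    For example, with three hypothetical input records (illustrative only, not the actual test data), the three phases behave like this:

    map input    : "a"  "b"  "a"
    map output   : ("a","")  ("b","")  ("a","")
    after shuffle: ("a", ["",""])  ("b", [""])
    reduce output: "a"  "b"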

Algorithm code:

package com.xxx.hadoop.mapred;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Data deduplication: each input line is emitted as a map output key,
 * so after the shuffle every distinct line reaches reduce exactly once.
 */
public class DeDuplicationApp {

	public static class Map extends Mapper<Object, Text, Text, Text> {
		@Override
		protected void map(Object key, Text value, Context context)
				throws IOException, InterruptedException {
			// Emit the whole line as the key; the empty Text value
			// carries no information, the key is all we need.
			context.write(value, new Text());
		}
	}

	public static class Reduce extends Reducer<Text, Text, Text, Text> {
		@Override
		protected void reduce(Text key, Iterable<Text> values, Context context)
				throws IOException, InterruptedException {
			// Keys are unique at this point, so writing each key once
			// yields the deduplicated data set.
			context.write(key, new Text());
		}
	}

	public static void main(String[] args) throws Exception {
		String input = "/user/root/deduplication/input",
			   output = "/user/root/deduplication/output";
		System.setProperty("HADOOP_USER_NAME", "root");
		Configuration conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.56.202:9000");

		Job job = Job.getInstance(conf);
		job.setJarByClass(DeDuplicationApp.class);
		job.setMapperClass(Map.class);
		job.setReducerClass(Reduce.class);
		FileInputFormat.addInputPath(job, new Path(input));
		FileOutputFormat.setOutputPath(job, new Path(output));

		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(Text.class);

		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}
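
    One optional refinement, not part of the original listing: because the Reduce class only re-emits each key with an empty value, and its output types match the map output types, the same class can also serve as a combiner, collapsing duplicate keys within each split before the shuffle. A minimal sketch of the extra driver line, under that assumption:

	// Optional: run Reduce map-side as a combiner. Safe here because
	// reduce just re-emits the key, so applying it twice changes nothing.
	job.setCombinerClass(Reduce.class);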

Original data:

Run the program; the console prints the following:

2019-08-31 14:33:56 [INFO ]  [main]  [org.apache.hadoop.conf.Configuration.deprecation] session.id is deprecated. Instead, use dfs.metrics.session-id
2019-08-31 14:33:56 [INFO ]  [main]  [org.apache.hadoop.metrics.jvm.JvmMetrics] Initializing JVM Metrics with processName=JobTracker, sessionId=
2019-08-31 14:33:57 [WARN ]  [main]  [org.apache.hadoop.mapreduce.JobResourceUploader] Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-08-31 14:33:57 [WARN ]  [main]  [org.apache.hadoop.mapreduce.JobResourceUploader] No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2019-08-31 14:33:57 [INFO ]  [main]  [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] Total input paths to process : 3
2019-08-31 14:33:57 [INFO ]  [main]  [org.apache.hadoop.mapreduce.JobSubmitter] number of splits:3
2019-08-31 14:33:57 [INFO ]  [main]  [org.apache.hadoop.mapreduce.JobSubmitter] Submitting tokens for job: job_local1661418487_0001
2019-08-31 14:33:58 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] The url to track the job: http://localhost:8080/
2019-08-31 14:33:58 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Running job: job_local1661418487_0001
2019-08-31 14:33:58 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] OutputCommitter set in config null
2019-08-31 14:33:58 [INFO ]  [Thread-3]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 14:33:58 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2019-08-31 14:33:58 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] Waiting for map tasks
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1661418487_0001_m_000000_0
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@3c3c2e72
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/deduplication/input/a.txt:0+12
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 15; bufvoid = 104857600
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214388(104857552); length = 9/6553600
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1661418487_0001_m_000000_0 is done. And is in the process of committing
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1661418487_0001_m_000000_0' done.
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1661418487_0001_m_000000_0
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1661418487_0001_m_000001_0
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 14:33:58 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@4b250759
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/deduplication/input/b.txt:0+12
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 15; bufvoid = 104857600
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214388(104857552); length = 9/6553600
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1661418487_0001_m_000001_0 is done. And is in the process of committing
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1661418487_0001_m_000001_0' done.
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1661418487_0001_m_000001_0
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1661418487_0001_m_000002_0
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@77ad94ef
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/deduplication/input/c.txt:0+12
2019-08-31 14:33:59 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Job job_local1661418487_0001 running in uber mode : false
2019-08-31 14:33:59 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job]  map 100% reduce 0%
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 15; bufvoid = 104857600
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214388(104857552); length = 9/6553600
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1661418487_0001_m_000002_0 is done. And is in the process of committing
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1661418487_0001_m_000002_0' done.
2019-08-31 14:33:59 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1661418487_0001_m_000002_0
2019-08-31 14:33:59 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] map task executor complete.
2019-08-31 14:33:59 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] Waiting for reduce tasks
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1661418487_0001_r_000000_0
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@651b16b8
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.ReduceTask] Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@12884be
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] MergerManager: memoryLimit=1265788544, maxSingleShuffleLimit=316447136, mergeThreshold=835420480, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2019-08-31 14:33:59 [INFO ]  [EventFetcher for fetching Map Completion Events]  [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] attempt_local1661418487_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1661418487_0001_m_000002_0 decomp: 23 len: 27 to MEMORY
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 23 bytes from map-output for attempt_local1661418487_0001_m_000002_0
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 23, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->23
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1661418487_0001_m_000001_0 decomp: 23 len: 27 to MEMORY
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 23 bytes from map-output for attempt_local1661418487_0001_m_000001_0
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 23, inMemoryMapOutputs.size() -> 2, commitMemory -> 23, usedMemory ->46
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1661418487_0001_m_000000_0 decomp: 23 len: 27 to MEMORY
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 23 bytes from map-output for attempt_local1661418487_0001_m_000000_0
2019-08-31 14:33:59 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 23, inMemoryMapOutputs.size() -> 3, commitMemory -> 46, usedMemory ->69
2019-08-31 14:33:59 [INFO ]  [EventFetcher for fetching Map Completion Events]  [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] EventFetcher is interrupted.. Returning
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Merging 3 sorted segments
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Down to the last merge-pass, with 3 segments left of total size: 51 bytes
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merged 3 segments, 69 bytes to disk to satisfy reduce memory limit
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merging 1 files, 69 bytes from disk
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merging 0 segments, 0 bytes from memory into reduce
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Merging 1 sorted segments
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Down to the last merge-pass, with 1 segments left of total size: 59 bytes
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.conf.Configuration.deprecation] mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task:attempt_local1661418487_0001_r_000000_0 is done. And is in the process of committing
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task attempt_local1661418487_0001_r_000000_0 is allowed to commit now
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] Saved output of task 'attempt_local1661418487_0001_r_000000_0' to hdfs://192.168.56.202:9000/user/root/deduplication/output/_temporary/0/task_local1661418487_0001_r_000000
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] reduce > reduce
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1661418487_0001_r_000000_0' done.
2019-08-31 14:33:59 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1661418487_0001_r_000000_0
2019-08-31 14:33:59 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] reduce task executor complete.
2019-08-31 14:34:00 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job]  map 100% reduce 100%
2019-08-31 14:34:00 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Job job_local1661418487_0001 completed successfully
2019-08-31 14:34:00 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Counters: 35
	File System Counters
		FILE: Number of bytes read=4066
		FILE: Number of bytes written=1095628
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=108
		HDFS: Number of bytes written=30
		HDFS: Number of read operations=33
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=6
	Map-Reduce Framework
		Map input records=9
		Map output records=9
		Map output bytes=45
		Map output materialized bytes=81
		Input split bytes=381
		Combine input records=0
		Combine output records=0
		Reduce input groups=6
		Reduce shuffle bytes=81
		Reduce input records=9
		Reduce output records=6
		Spilled Records=18
		Shuffled Maps =3
		Failed Shuffles=0
		Merged Map outputs=3
		GC time elapsed (ms)=8
		Total committed heap usage (bytes)=1476919296
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=36
	File Output Format Counters 
		Bytes Written=30

Data after the computation:

    (The counters above already tell the story: Map input records=9 across 3 input files became Reduce output records=6, so three duplicate records were removed.)

    The deduplication here simply exploits how MapReduce itself works; no elaborate design is needed. To deduplicate an ordinary array, we would iterate over it, remember the values already visited, and discard any later value that is already in the set of seen values. MapReduce, on the other hand, is a divide-and-conquer framework made of several phases, each with its own task, and it is precisely these phases that group and sort our data, so the data is deduplicated automatically.
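
    For contrast, here is a minimal single-machine sketch of that array-style approach (the class and method names are illustrative, not from the original article):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LocalDeDup {
	// Keep a line only the first time it appears; Set.add returns
	// false for values that have already been seen.
	public static List<String> dedup(List<String> lines) {
		Set<String> seen = new HashSet<>();
		List<String> result = new ArrayList<>();
		for (String line : lines) {
			if (seen.add(line)) {
				result.add(line);
			}
		}
		return result;
	}
}

    This is fine while the data fits in one JVM's memory; the MapReduce version exists precisely for the case where it does not.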
