Parsing Stackoverflow's posts.xml on Hadoop


Question


I am following this article by Anoop Madhusudanan on CodeProject to build a recommendation engine, not on a cluster but on my own system.

The problem comes when I try to parse posts.xml, whose structure is as follows:

 <row Id="99" PostTypeId="2" ParentId="88" CreationDate="2008-08-01T14:55:08.477" Score="2" Body="&lt;blockquote&gt;&#xD;&#xA;  &lt;p&gt;The actual resolution of gettimeofday() depends on the hardware architecture. Intel processors as well as SPARC machines offer high resolution timers that measure microseconds. Other hardware architectures fall back to the system’s timer, which is typically set to 100 Hz. In such cases, the time resolution will be less accurate. &lt;/p&gt;&#xD;&#xA;&lt;/blockquote&gt;&#xD;&#xA;&#xD;&#xA;&lt;p&gt;I obtained this answer from &lt;a href=&quot;http://www.informit.com/guides/content.aspx?g=cplusplus&amp;amp;seqNum=272&quot; rel=&quot;nofollow&quot;&gt;High Resolution Time Measurement and Timers, Part I&lt;/a&gt;&lt;/p&gt;" OwnerUserId="25" LastActivityDate="2008-08-01T14:55:08.477" />

Now I need to parse this file (size 1.4 GB) on Hadoop, for which I have written code in Java and created its jar. The Java class is as follows:

import java.io.IOException;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.DocumentBuilder;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.w3c.dom.Node;
import org.w3c.dom.Element;

import java.io.File;


import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.Job;


public class Recommend {

    static class Map extends Mapper<Text, Text, Text, Text> {
        Path path;
        String fXmlFile;
        DocumentBuilderFactory dbFactory;
        DocumentBuilder dBuilder;
        Document doc;

        /**
         * Given an output filename, write a bunch of random records to it.
         */
        public void map(LongWritable key, Text value,
                OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
            try{
                fXmlFile=value.toString();
                dbFactory = DocumentBuilderFactory.newInstance();
                dBuilder= dbFactory.newDocumentBuilder();
                doc= dBuilder.parse(fXmlFile);

                doc.getDocumentElement().normalize();
                NodeList nList = doc.getElementsByTagName("row");

                for (int temp = 0; temp < nList.getLength(); temp++) {

                    Node nNode = nList.item(temp);
                    Element eElement = (Element) nNode;

                    Text keyWords =new Text(eElement.getAttribute("OwnerUserId"));
                    Text valueWords = new Text(eElement.getAttribute("ParentId"));
                    String val=keyWords.toString()+" "+valueWords.toString();
                    // Write the sentence 
                    if(keyWords != null && valueWords != null){
                        output.collect(keyWords, new Text(val));
                    }
                }

            }catch (Exception e) {
                e.printStackTrace();
            } 
        }
    }

    /**
     * 
     * @throws IOException 
     */
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        //String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        /*if (args.length != 2) {
          System.err.println("Usage: wordcount <in> <out>");
          System.exit(2);
        }*/
//      FileSystem fs = FileSystem.get(conf);
        Job job = new Job(conf, "Recommend");
        job.setJarByClass(Recommend.class);
        
        // the keys are words (strings)
        job.setOutputKeyClass(Text.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);
        
        // the values are counts (ints)
        job.setOutputValueClass(Text.class);

        job.setMapperClass(Map.class);
        //conf.setReducerClass(Reduce.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
       
        System.exit(job.waitForCompletion(true) ? 0 : 1);
         Path outPath = new Path(args[1]);
            FileSystem dfs = FileSystem.get(outPath.toUri(), conf);
            if (dfs.exists(outPath)) {
            dfs.delete(outPath, true);
            }
    }
}

I expect the output to be a file in Hadoop containing OwnerUserId ParentId pairs, but instead I get output like:

1599788   <row Id="2292" PostTypeId="2" ParentId="2284" CreationDate="2008-08-05T13:28:06.700" Score="0" ViewCount="0" Body="&lt;p&gt;The first thing you should do is contact the main people who run the open source project. Ask them if it is ok to contribute to the code and go from there.&lt;/p&gt;&#xD;&#xA;&#xD;&#xA;&lt;p&gt;Simply writing your improved code and then giving it to them may result in your code being rejected.&lt;/p&gt;" OwnerUserId="383" LastActivityDate="2008-08-05T13:28:06.700" />

I don't know where the 1599788 that appears as a key from the mapper comes from.

I don't know much about writing mapper classes for Hadoop; I need help modifying my code to get the desired output.

Thanks in advance.

Answer

After a lot of research and experiments, I finally learnt how to write a map function for parsing xml files with the syntax like the one I provided. The cause of the stray keys was that my original map method did not match the new-API Mapper.map signature, so Hadoop silently fell back to the identity mapper, which emits each line's byte offset as the key (that is where 1599788 came from) together with the raw line. I changed my approach and this is my new mapper code... It works for my use case.

Hope it helps someone and saves them some time :)

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Map extends Mapper<LongWritable, Text, NullWritable, Text> {
    NullWritable obj = NullWritable.get();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input value is one <row ... /> line; walk its whitespace-separated
        // attribute tokens and pick out the attributes we need.
        StringTokenizer tok = new StringTokenizer(value.toString());
        String pa = null, ow = null, pi = null, v;
        while (tok.hasMoreTokens()) {
            String[] arr;
            String val = tok.nextToken();
            if (val.contains("PostTypeId")) {
                arr = val.split("\"");       // PostTypeId="2" -> ["PostTypeId=", "2"]
                pi = arr[arr.length - 1];
                if (pi.equals("2")) {        // only answers (PostTypeId=2) have a ParentId
                    continue;
                } else {
                    break;                   // questions etc.: skip the rest of this row
                }
            }
            if (val.contains("ParentId")) {
                arr = val.split("\"");
                pa = arr[arr.length - 1];
            } else if (val.contains("OwnerUserId")) {
                arr = val.split("\"");
                ow = arr[arr.length - 1];
                if (pa != null && ow != null) {
                    v = String.format("%s,%s", ow, pa);  // e.g. 25,88
                    context.write(obj, new Text(v));
                }
            }
        }
    }
}
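The driver has to be updated too: the original one declares LongWritable as the map output key class, which no longer matches this mapper. A minimal sketch of a matching map-only driver, assuming the Map class above is on the classpath and no reducer is wanted (the driver class name is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RecommendDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "Recommend");
        job.setJarByClass(RecommendDriver.class);
        job.setMapperClass(Map.class);
        job.setNumReduceTasks(0);                    // map-only job: mapper output is the final output
        job.setOutputKeyClass(NullWritable.class);   // must match the mapper's output types
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With no reduce tasks, TextInputFormat still hands the mapper one `<row ... />` line per call, and each `context.write` goes straight to the part files.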

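Tokenizing on whitespace works for these rows, but attribute values such as Body can contain text that happens to include an attribute name, and values with spaces are split across tokens. A regex over the whole line is a sturdier way to extract the same fields. The sketch below is a plain, standalone illustration (the class and method names are mine, not from the original code); it relies on posts.xml values being XML-escaped, so a raw quote can never occur inside a value:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RowAttributes {
    // Matches attr="value" pairs; values in posts.xml are XML-escaped,
    // so the value itself can never contain a raw double quote.
    private static final Pattern ATTR = Pattern.compile("(\\w+)=\"([^\"]*)\"");

    /** Returns "OwnerUserId,ParentId" for answer rows (PostTypeId=2), else null. */
    public static String ownerAndParent(String line) {
        String postTypeId = null, parentId = null, ownerUserId = null;
        Matcher m = ATTR.matcher(line);
        while (m.find()) {
            switch (m.group(1)) {
                case "PostTypeId":  postTypeId  = m.group(2); break;
                case "ParentId":    parentId    = m.group(2); break;
                case "OwnerUserId": ownerUserId = m.group(2); break;
            }
        }
        if ("2".equals(postTypeId) && parentId != null && ownerUserId != null) {
            return ownerUserId + "," + parentId;
        }
        return null;
    }

    public static void main(String[] args) {
        String row = "<row Id=\"99\" PostTypeId=\"2\" ParentId=\"88\" OwnerUserId=\"25\" />";
        System.out.println(ownerAndParent(row)); // prints 25,88
    }
}
```

Inside the mapper, the while-loop over tokens would simply be replaced by a call to a helper like this, writing the non-null result to the context.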

