CSC352 Homework 3
Programming the XGrid
The class decided on the contents of this homework and its due date: March 30th. It lays the groundwork for Project #2. Feel free to work in pairs.
Problem Statement
Process N wiki pages, and for each one
- keep track of the categories contained in the page
- find the 5 most frequent words (not including stop words) in the page.
- associate with each category the most frequent words that have been associated with it over the N pages processed
- output the result (or a sample of it)
- measure the execution time of the program
- write a summary of it as illustrated in the guidelines presented in class (3/9, 3/11). (Removed --D. Thiebaut 13:15, 10 March 2010 (UTC))
- (Addition --D. Thiebaut 13:15, 10 March 2010 (UTC)) For this homework, concentrate on the programming, and leave the formatting, graphing, and analysis part for the project. You should still report a table of measurements, which you can include in the header of the program or in a PDF.
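A minimal sketch of the per-page processing is shown below, in Python. The stop-word file name, the regex-based extraction of the <cat> and <text> sections, and the use of one set of words per category are assumptions made for illustration, not a prescribed design:

import re
from collections import defaultdict

# Hypothetical stop-word file; use whatever list you submit with your program.
STOP_WORDS = set(open("stopwords.txt").read().split())

def topWords(text, n=5):
    """Return the n most frequent non-stop words in text."""
    counts = defaultdict(int)
    for word in re.findall(r"[a-z]+", text.lower()):
        if word not in STOP_WORDS:
            counts[word] += 1
    return sorted(counts, key=counts.get, reverse=True)[:n]

def processPage(xml, categoryWords):
    """Record the page's 5 most frequent words under each of its categories."""
    categories = re.findall(r"<cat>(.*?)</cat>", xml)
    match = re.search(r"<text>(.*?)</text>", xml, re.DOTALL)
    best = topWords(match.group(1)) if match else []
    for cat in categories:
        categoryWords[cat].update(best)

# Accumulate over the N pages (fetching is covered in Accessing Wiki Pages below):
categoryWords = defaultdict(set)
# for xml in pages: processPage(xml, categoryWords)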
Details
Wiki Pages
The details of how to obtain the Ids of wiki pages, and how to fetch the pages themselves, are presented in XGrid Lab 2.
XGrid Submission
You are free to use asynchronous jobs (Lab 1) or batch jobs (Lab 2) to submit work to the XGrid. One approach might be better than the other, and it would be good for the class, as a group, to try several different approaches. Note that batch processing is easier, since Lab 2 already shows an example of processing wiki pages this way.
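For reference, here is roughly what the two styles look like with the xgrid command-line tool. The controller address, password, script name, batch plist, and job id below are placeholders, and you should double-check the exact flags against Lab 1 and Lab 2:

xgrid -h xgridmac.dyndns.org -p password -job submit /usr/bin/python process.py 10
xgrid -h xgridmac.dyndns.org -p password -job results -id 42
xgrid -h xgridmac.dyndns.org -p password -job batch myjob.plist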
XGrid Controller
You will use the XgridMac controller for this homework, unless the XGrid in Bass becomes available early enough.
Performance Measure
Two performance measures are obvious candidates: the total execution time, and the number of wiki pages processed per unit time (per second, or per minute if the implementation is slow).
Your assignment is to compute the number of pages processed per unit of time as a function of the number of processors. The complexity here is that this number is not fixed, and we cannot easily control it (although we could always go to FH341 and turn some machines ON or OFF!). Sometimes the number of processors available will be 14, sometimes 16, sometimes 20...
You will also need to figure out how to pick the right number of pages per block, i.e., the number of pages processed in one swoop by a single task. In other words, to process 1000 wiki pages, you could generate 100 tasks that run in parallel, each task running on one processor and parsing 10 different pages, or you could create 10 tasks processing 100 pages each. Make sure you explain why you picked a particular approach.
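A minimal timing sketch in Python; the runAll function, standing for one complete run (submit all tasks, wait for all results), is a hypothetical placeholder:

import time

def measure(runAll, numPages):
    """Time one complete run and report the pages-per-second figure."""
    start = time.time()
    runAll()                      # submit every task and wait for the results
    elapsed = time.time() - start
    print("%d pages in %.1f s: %.2f pages/sec" % (numPages, elapsed, numPages / elapsed))

Repeating this for several task decompositions (100 tasks of 10 pages, 10 tasks of 100 pages, ...) and for different numbers of available processors gives the table of measurements requested above.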
Submission
Please submit your program(s), including everything needed to make it/them work (that includes files of stop words!). If you reported your measurements in a PDF, please include it as well!
submit hw3 yourfile1
submit hw3 yourfile2
submit hw3 etc...
Misc. Information
- Remember that this is the first part of Project 2. You may discover that there is something important that controls the performance of your program, but you may not have time to fully explore/develop a solution in this homework. Make sure you mention it in your report, indicating that this is something that will be useful to incorporate in the project.
Accessing Wiki Pages
This is a two-step process. First we need to get a number of page Ids. For example, if we just want 10 pages, we request the following URL:

http://xgridmac.dyndns.org/cgi-bin/getWikiPageById.cgi?Count=10

The output will be:
10000
10050000
10070000
10140000
10200000
10230000
1030000
10320000
1040000
10430000
To get the page with Id 1000, for example, we access the Web server at the same address, but with a different request:

http://xgridmac.dyndns.org/cgi-bin/getWikiPageById.cgi?Id=1000

The output is:
<xml>
<title>Hercule Poirot</title>
<id>1000</id>
<contributors>
<contrib>
<username>TXiKiBoT</username>
<id>3171782</id>
<length>51946</length></contrib>
</contributors>
<categories>
<cat>Hercule Poirot</cat>
<cat>Fictional private investigators</cat>
<cat>Series of books</cat>
<cat>Hercule Poirot characters</cat>
<cat>Fictional Belgians</cat>
</categories>
<pagelinks>
<page></page>
<page>16 July</page>
<page>1916</page>
<page>1989</page>
<page>2011</page>
<page>A. E. W. Mason</page>
<page>Academy Award</page>
<page>Agatha Christie</page>
<page>Agatha Christie Hour</page>
<page>Agatha Christie's Great Detectives Poirot and Marple</page>
<page>Agatha Christie's Poirot</page>
...
<page>parody</page>
<page>private detective</page>
<page>refugee</page>
<page>retroactive continuity</page>
<page>turnip pocket watch</page>
</pagelinks>
<text>
. Belgium Belgian . occupation = Private Dectective. Former Retired DetectiveFormer Police Police officer officer .
... (lots of text removed here...)
. Hercule Poirot . uk. Еркюль Пуаро . vi. Hercule Poirot . zh. 赫丘勒·白羅 .
</text>
</xml>
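A minimal sketch of the two-step fetch in Python, using the two URLs above (error handling and retries omitted):

import urllib.request

BASE = "http://xgridmac.dyndns.org/cgi-bin/getWikiPageById.cgi"

def getPageIds(count):
    """Step 1: request count page Ids, returned one per line."""
    reply = urllib.request.urlopen("%s?Count=%d" % (BASE, count)).read()
    return reply.decode("utf-8").split()

def getPage(pageId):
    """Step 2: fetch the XML of the page with the given Id."""
    reply = urllib.request.urlopen("%s?Id=%s" % (BASE, pageId)).read()
    return reply.decode("utf-8")

for pageId in getPageIds(10):
    xml = getPage(pageId)
    # ... process xml as in the Problem Statement sketch ...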
In general, the page will have several sections, coded in XML, and always in the same order:
- the title, in <title> tags,
- the contributor, in <contributors> and <contrib> tags,
- the categories the page belongs to, in <categories> and <cat> tags,
- the links to other Wikipedia pages the page contains, in <pagelinks> and <page> tags,
- the text of the page, with all the HTML and wiki tags removed, between <text> tags.
The end of the text section always contains foreign characters. The text is encoded in UTF-8, the international character encoding of which ASCII is a subset.
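Since the sections always appear in this order and the tags are regular, a simple extraction sketch suffices. This uses regular expressions rather than a full XML parser, which is an assumption for illustration, not a requirement:

import re

def getSection(xml, tag):
    """Return the contents of the first <tag>...</tag> element, or '' if absent."""
    match = re.search(r"<%s>(.*?)</%s>" % (tag, tag), xml, re.DOTALL)
    return match.group(1) if match else ""

# xml as returned by getPage() in the fetch sketch above:
# title      = getSection(xml, "title")
# categories = re.findall(r"<cat>(.*?)</cat>", getSection(xml, "categories"))
# links      = re.findall(r"<page>(.*?)</page>", getSection(xml, "pagelinks"))
# text       = getSection(xml, "text")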
CGI Program
Just for information, the CGI program that processes the request is available here: CSC352 getWikiPageById.cgi.