Saint Louis University | Computer Science 150 | Dept. of Math & Computer Science
For this assignment, you are allowed to work with one other student if you wish (in fact, we suggest that you do so). If any student wishes to have a partner but has not been able to locate one, please let the instructor know so that we can match up partners.
Please make sure you adhere to the policies on academic integrity in this regard.
Rather than one large program, this assignment involves a series of smaller challenges. All of these are problems that were used as part of the ACM International Collegiate Programming Contest (ICPC). Each Fall, teams from SLU compete in the Mid-Central Regional qualifier (details).
Each problem is computational in nature, with the goal being to compute a specific output based on some input parameters. Each problem defines a clear and unambiguous form for the expected input and desired output. Relevant bounds on the size of the input are clearly specified. To be successful, the program must complete within 60 seconds on the given machine.
Each problem description offers a handful of sample inputs and the expected output for those trials as a demonstration. Behind the scenes, the judges often have hundreds of additional tests. Submitted programs are "graded" by literally running them on all of the judges' tests, capturing the output, and comparing whether the output is identical (character-for-character) to the expected output.
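That character-for-character comparison can be mimicked locally before submitting. Here is a minimal sketch (the function name and the sample strings are invented for illustration):

```python
def outputs_match(produced, expected):
    # A submission is accepted only if the two outputs are identical
    # character for character, including spacing, capitalization,
    # punctuation, and blank lines.
    return produced == expected

print(outputs_match("Hello World!\n", "Hello World!\n"))   # True
print(outputs_match("Hello World!\n", "Hello  World!\n"))  # False (extra space)
```

Even a single stray space or missing newline is enough for a submission to be rejected.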
If the test is successful, the team gets credit for completing the problem. If the test fails, the team is informed of the failure and allowed to resubmit (with a slight penalty applied). However, the team receives very little feedback from the judges. In essence, they are told that it failed but given no explanation as to the cause of the problem, or even the data set that leads to the problem.
Actually, the feedback is slightly more informative. Upon submitting a program, the team formally receives one of the following responses:
Success
Submission Error
This is reported if the submitted program does not properly
compile, is not properly named, or is clearly an attempt at a
different problem.
Run-time Error
This is reported if the program crashes during execution.
Wrong Answer
This designates that the program ran to completion, but the
content of the output does not match the expected results.
Presentation Error
This is reported in some cases where the students got the
wrong output not because their computations were wrong, but
because of a superficial problem in formatting their output.
This can occur if they misspell words, use incorrect or missing
punctuation, capitalize incorrectly, use too few or too many
spaces or extra blank lines, or present the wrong number of
significant digits in numeric output.
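For instance, suppose a (hypothetical) problem asks for an answer rounded to two decimal places. Printing the raw value instead of the formatted one yields the same number but a failing submission:

```python
answer = 2.0 / 3.0

# Formatting to exactly two decimal places matches the required output.
print('%.2f' % answer)   # prints 0.67

# Printing the raw float gives the same value with far more digits,
# which the judges would flag as incorrect presentation.
print(answer)
```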
manager = file('gnome.in')     # creates a file manager
where the precise file name to be used is specified in the problem statement (e.g., gnome.in). Subsequently, you can read a line at a time from that file using the syntax
line = manager.readline()      # returns the string of characters
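As a concrete illustration, here is a minimal sketch of that pattern in Python 3, where open() plays the role of file(). The sample data, the sentinel convention, and the per-line computation are invented for demonstration; a real problem statement dictates all three:

```python
# For demonstration only: create a tiny sample input file. During the
# contest, the judges supply this file, and its name (here gnome.in)
# comes from the problem statement.
with open('gnome.in', 'w') as f:
    f.write('3 1 4\n1 5\n0\n')

manager = open('gnome.in')        # open() plays the role of file() above
results = []
while True:
    line = manager.readline()     # returns the next line as a string
    line = line.rstrip('\n')      # strip the trailing newline before parsing
    if line == '0':               # a sentinel value often marks end of input
        break
    values = [int(tok) for tok in line.split()]
    results.append(sum(values))   # placeholder computation
    print(sum(values))
manager.close()
```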
Testing on sample data
Since each problem specification includes at least a few
sample cases of input and the expected output, there is no
reason to submit your program to the judges until you are
confident that it succeeds on those sample inputs. To help
you out, we have already typed up those sample input files;
they are available for download here, or
you can get them on turing by issuing the command
cp -Rp /Public/goldwasser/150/hw04 .
Of course, you are also welcome to add your own test cases to the input file to more thoroughly test your program.
Testing on the judges' data (on turing)
The judges for the contest will test programs on many cases
beyond those that were given as samples. Fortunately, we have
all of the judges' tests available on our system.
When you have the program working on the sample data and you
wish to test it on the judges' hidden data, you may execute the
following command from your turing account.
/Public/goldwasser/150/contest/judge gnome.py
(of course, using the actual name of the source code file). Our automated program is not quite as professional as the real judges, but it will do. In particular, it does not automatically terminate after 60 seconds elapse; in fact, it never terminates. It will tell you when it starts executing. If too much time has passed, you may press ctrl-C to kill the process yourself.
Also, a correct distinction between "Wrong Answer" and "Presentation Error" is difficult to automate. We've made a decent attempt to look for common presentation errors, but in case of doubt, we will report it as a "Wrong Answer".
Although "homeworks" are typically turned in on paper, for this assignment we would like you to test your programs on the judges' data and to formally submit all of your source code through the course web page. We have created a folder named hw04 for this purpose. Please submit separate files for each problem (gnome.py, dup.py, rps.py, speed.py).
You should also submit a separate 'readme' text file, detailing how much time you spent on each of the challenges, and making sure to credit both students if working as a pair.
The assignment is worth 10 points.
Time for one more challenge? We will award an extra point if you solve Easier Done than Said? (say.py)