Saint Louis University | Computer Science 1300/5001 | Computer Science Department
For this assignment, you must work individually on the design and implementation of your project. Please make sure you adhere to the policies on academic integrity in this regard.
Rather than one large program, this assignment involves a series of smaller challenges. All of these are problems that were used as part of the ACM International Collegiate Programming Contest (ICPC). Each fall, teams from SLU compete in the Mid-Central Regional qualifier.
Each problem is computational in nature, with the goal being to compute a specific output based on some input parameters. Each problem defines a clear and unambiguous form for the expected input and desired output. Relevant bounds on the size of the input are clearly specified. To be successful, the program must complete within 60 seconds on the given machine.
Each problem description offers a handful of sample inputs and the expected output for those trials as a demonstration. Behind the scenes, the judges often have hundreds of additional tests. Submitted programs are "graded" by literally running them on all of the judges' tests, capturing the output, and comparing whether the output is identical (character-for-character) to the expected output.
If the test is successful, the team gets credit for completing the problem. If the test fails, the team is informed of the failure and allowed to resubmit (with a slight penalty applied). However, the team receives very little feedback from the judges. In essence, they are told that the program failed, but given no explanation of the cause, nor even the data set that triggered the failure.
Actually, the feedback is slightly more informative. Upon submitting a program, the team formally receives one of the following responses:

Success

Submission Error
This is reported if the submitted program does not properly compile, is not properly named, or is clearly an attempt at a different problem.

Run-time Error
This is reported if the program crashes during execution.

Wrong Answer
This designates that the program ran to completion, but the content of the output does not match the expected results.

Presentation Error
This designates cases where the output is wrong not because the computations were wrong, but due to a superficial problem in formatting the output. This can occur if words are misspelled, punctuation is incorrect or missing, capitalization is wrong, there are too few or too many spaces or extra blank lines, or numeric output has the wrong number of significant digits. (A short formatting sketch follows this list.)
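To make such formatting pitfalls concrete, here is a minimal sketch of printing numeric output to an exact specification. The value and the two-decimal requirement are hypothetical; each problem statement dictates its own format.

distance = 3.14159
print(distance)               # 3.14159 -- too many digits for a two-decimal specification
print('Distance:', distance)  # an extra label the judge does not expect
print(f'{distance:.2f}')      # 3.14 -- matches the hypothetical specification exactly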
SCORING: Contest problems are typically scored in "all or nothing" fashion: your program must produce output that matches, character-for-character, the expectations given in the problem description.
As a course assignment, you should submit your code whether you solved the problem or not, as we will give partial credit for the attempt. But for full credit you will need to have correctly solved the problem in accordance with the contest rules.
SOURCE CODE: The source code for the required problem must be named as indicated in the problem specification (e.g. gnome.py for the first challenge).
INPUT: Because the automated scoring examines your output, it is important that you not display any prompts to the user when seeking input. You should simply assume that the user will type input that adheres to the specifications given in a problem; you do not need to error-check any of the input.
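As a quick illustration, a prompt-less read of a single integer might look like the following (the variable name n is our own choice):

n = int(input())    # note that no prompt string is passed to input()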
For flexibility, we will allow your script to read input in one of two ways:
You may read it using the standard input() function, but again you must not provide any prompt when calling this function. In this case, to test your program you will need to type the input at the keyboard while the program is running, or copy/paste it from another source into the window where input is expected.
One disadvantage is that this requires a lot of typing. It also becomes a bit more difficult to distinguish your generated output when it is intermingled with the input.
As an alternative approach, we will allow you to pre-type your input into a saved file, and then use that saved file as the input to your program. This has an advantage in that you can test and retest without having to retype (or copy) the input, and it will make the results cleaner in that only your output will be displayed (not the contents of the file). But to use this approach, we need to quickly teach you a bit about working with files. Also, for our automated testing to work, the name of that file must be fixed, so we will use the convention that it should be a .txt file with the same prefix as the source code (e.g. gnome.txt for the first problem).
To get input from a file, you will need to first create a file object in Python using the following syntax:

manager = open('gnome.txt')

You can subsequently use that object to get lines of input, but rather than the standard input() function you should make a call to

line = manager.readline()    # returns the string of characters on the line, including the final newline

Once you have that line of input as a string, you can do whatever you want with it (just as you can do with the result of the input() function). And you can make an additional call to manager.readline() each time you want an additional line of input.
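Putting these pieces together, here is a minimal sketch of a complete program that reads from gnome.txt in this style. The input format (a count on the first line, followed by that many lines to process) and the uppercase computation are purely hypothetical stand-ins; the real format and computation come from the problem statement.

manager = open('gnome.txt')
n = int(manager.readline())            # hypothetical: first line holds a count
for _ in range(n):
    line = manager.readline().strip()  # strip() removes the trailing newline
    print(line.upper())                # hypothetical computation; print only the required output
manager.close()

Note that if you later switch back to keyboard input, each manager.readline() call is simply replaced by input(), whose result does not include the trailing newline.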
TESTING: If you are on our department's system, you may pre-test your program against the judge's data by typing the following command at a console terminal from within the same folder in which your source code exists:

/public/goldwasser/1300/contest/judge gnome.py

Our automated program is not quite as professional as the real judges, but it will do. In particular, it does not automatically terminate after 60 seconds elapse; in fact, it never terminates. It will tell you when it starts executing. If too much time has passed, you may press ctrl-C to kill the process yourself.
Also, a correct distinction between "wrong output" and "presentation error" is difficult to automate. We've made a decent attempt to look for common presentation errors, but in case of doubt, we will report it as a "wrong output".
If you wish to otherwise test on your own system, files that match the published sample data are available for download, or you can get them on hopper by issuing the command
cp -Rp /public/goldwasser/1300/contest/inputs .
Of course, you are also welcome to add your own test cases to the input file to more thoroughly test your program. (The judges certainly will!)
Please submit separate files for each problem (gnome.py, dup.py, rps.py, speed.py).
You should also submit one 'readme' text file that serves as a summary for the entire assignment and estimates how much time you spent on each of the challenges.
NOTE: Even if you successfully tested your program using the automated judge on hopper, you are still responsible for submitting your source code through our standard submission system for assignments.
Please see details regarding the submission process from the general programming web page, as well as a discussion of the late policy.
The assignment is worth 40 points (10 points per problem).
Time for one more challenge? We will award an extra point if you solve Easier Done than Said? (say.py)