
Saint Louis University

Computer Science 1300/5001
Introduction to Object-Oriented Programming

Michael Goldwasser

Fall 2018

Computer Science Department

Programming Assignment 04

Programming Contest

Due: 11:59pm, Monday, 15 October 2018




Collaboration Policy

For this assignment, you must work individually on the design and implementation of your programs.

Please make sure you adhere to the policies on academic integrity in this regard.


Overview

Rather than one large program, this assignment involves a series of smaller challenges. All of these are problems that were used as part of the ACM International Collegiate Programming Contest (ICPC). Each Fall, teams from SLU compete in the Mid-Central Regional qualifier (details).

Each problem is computational in nature: the goal is to compute a specific output based on some given input. Each problem defines a clear and unambiguous format for the expected input and the desired output, and relevant bounds on the size of the input are clearly specified. To be successful, a program must complete within 60 seconds on the given machine.

Each problem description offers a handful of sample inputs, together with the expected output for those trials, as a demonstration. Behind the scenes, the judges often have hundreds of additional tests. Submitted programs are "graded" by literally running them on all of the judges' tests, capturing the output, and checking whether that output is identical (character for character) to the expected output.
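
For example, even a single stray space or a missing newline is enough to fail a test. The check is no more forgiving than the following minimal sketch (the file names here are hypothetical, purely for illustration):

    # Illustrative only: judging is an exact, character-for-character comparison.
    expected = open('expected.out').read()
    actual = open('submission.out').read()
    print('match' if actual == expected else 'mismatch')   # '3\n' vs '3 \n' fails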

If the test is successful, the team gets credit for completing the problem. If the test fails, the team is informed of the failure and allowed to resubmit (with a slight penalty applied). However, the team receives very little feedback from the judges: in essence, they are told that the submission failed, but given no explanation of the cause, nor even the data set that triggered the failure.

Actually, the feedback is slightly more informative. Upon submitting a program, the team formally receives one of the following responses: correct; wrong output; presentation error (the right results, but with incorrect spacing or formatting); run-time error; or time-limit exceeded.


Logistics

Because of the automated judging, the programs you write for this assignment will be somewhat different from those for other assignments.
  1. SCORING: Contest problems are typically scored in "all or nothing" fashion: your program must produce output that matches the expectations given in the problem description character for character.

    As a course assignment, you should submit your code whether you solved the problem or not, as we will give partial credit for the attempt. But for full credit you will need to have correctly solved the problem in accordance with the contest rules.

  2. SOURCE CODE: The source code for the required problem must be named as indicated in the problem specification (e.g. gnome.py for the first challenge).

  3. INPUT: Because the automated scoring examines your output, it is important that you not display any prompts to the user when seeking input. You should simply assume that the input adheres to the specifications given in the problem; you do not need to error-check it. (A minimal sketch of prompt-free input handling appears after this list.)

    For flexibility, we will allow your script to read input in one of two ways:

  4. TESTING: If you are on our department's system, you may pre-test your program against the judge's data by typing the following command at a console terminal, from within the folder containing your source code:

    /public/goldwasser/1300/contest/judge gnome.py
    Our automated judge is not quite as professional as the real judges, but it will do. In particular, it does not automatically terminate your program after 60 seconds elapse; in fact, it never terminates it. It will tell you when your program starts executing, and if too much time has passed, you may press ctrl-C to kill the process yourself.

    Also, correctly distinguishing a "wrong output" from a "presentation error" is difficult to automate. We have made a decent attempt to look for common presentation errors, but in cases of doubt we will report "wrong output".

    If you instead wish to test on your own system, files that match the published sample data can be downloaded here, or you can get them on hopper by issuing the command

    	cp -Rp /public/goldwasser/1300/contest/inputs .

    Of course, you are also welcome to add your own test cases to the input file to test your program more thoroughly. (The judges certainly will!)
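
To illustrate the prompt-free style required above, here is a minimal sketch, assuming the program reads from standard input (the parsing shown is hypothetical; each problem defines its own format):

    import sys

    def main():
        # No prompts: the judge supplies all of the input, so just read it.
        for line in sys.stdin:
            tokens = line.split()        # problem-specific parsing goes here
            # ... compute the required answer from the tokens ...
            print(' '.join(tokens))      # print exactly the required output

    if __name__ == '__main__':
        main()

With this structure you can test locally by redirecting a sample file, e.g. python3 gnome.py < inputs/gnome.in (the file names inside the inputs folder are an assumption here).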


A Practice Problem

Adding Numbers (add.py)
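
The full specification is in the problem statement; as a minimal sketch, assuming each line of input holds integers whose sum should be printed on its own line, add.py might look like:

    import sys

    def main():
        # Assumed format: integers on each line; print one sum per line.
        for line in sys.stdin:
            numbers = [int(tok) for tok in line.split()]
            print(sum(numbers))          # no prompts, no extra text

    if __name__ == '__main__':
        main()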


The Challenges

The four required problems are gnome.py, dup.py, rps.py, and speed.py; see the individual problem statements for the full specifications.


Submitting Your Assignment

Please submit a separate file for each problem (gnome.py, dup.py, rps.py, speed.py).

You should also submit one 'readme' text file that serves as a summary for the entire assignment and estimates how much time you spent on each of the challenges.

NOTE: Even if you successfully tested your program using the automated judge on hopper, you are still responsible for submitting your source code through our standard submission system for assignments.

Please see the general programming web page for details regarding the submission process, as well as a discussion of the late policy.


Grading Standards

The assignment is worth 40 points (10 points per problem).


Extra Credit

Time for one more challenge? We will award an extra point if you solve Easier Done than Said? (say.py).

