# coderate: a meetup event idea where you rate and criticise each other's code.

i wrote about a code related meetup in @/codeclub previously. that post was a bit vague about the details on the structure of the event. i thought a little bit longer about this and came up with a better idea.

my idea is based on the following two observations:

so the idea is this:

and that's it. you code, rate, comment, improve until the event ends. while the solving time and ratings can be used to make the event competitive, that is not the goal. they are just metrics to give some meaning to the event. it's all about learning together through code reviews.

# evaluation

to make things exciting, i need a service that can evaluate the solutions for correctness. but i really want to keep things simple. i'm not really concerned about cheating so i'd push the evaluation to the client side. coderate comes with a command line tool that runs the user's solution on their machine and tells whether it was correct. it fetches the input data from the online service and runs the solution against it. the solutions will run under a strict time limit of 2 seconds. if a solution is incorrect, the tool tells the participant why or just shows a diff between the expected and the actual output.
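here's a minimal sketch in go of how the per-input check could work, assuming the inputs and expected outputs were already fetched into local files (the file layout and names here are made up):

  // evalInput runs an already compiled solution binary on one input file
  // under the 2 second limit and compares its output against the expected one.
  package main

  import (
    "bytes"
    "context"
    "fmt"
    "os"
    "os/exec"
    "time"
  )

  func evalInput(bin, inputFile, expectedFile string) error {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    input, err := os.Open(inputFile)
    if err != nil {
      return err
    }
    defer input.Close()

    cmd := exec.CommandContext(ctx, bin)
    cmd.Stdin = input
    got, err := cmd.Output()
    if err != nil {
      return fmt.Errorf("run failed or timed out: %v", err)
    }

    want, err := os.ReadFile(expectedFile)
    if err != nil {
      return err
    }
    if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
      return fmt.Errorf("wrong output") // a real tool would print a diff here
    }
    return nil
  }

  func main() {
    // hypothetical files; the real tool would iterate over all inputs.
    if err := evalInput("./problemx", "inputs/07", "outputs/07"); err != nil {
      fmt.Println("07: FAIL:", err)
    } else {
      fmt.Println("07: OK")
    }
  }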

to keep the interface simple, the solution must consist of a single file named after the problem. it must read the input data from standard input and write its answer to standard output. assuming the user is working on `problemx`, they can run the evaluator as a shell command like this:

  coderate eval problemx.go

this will compile their program and run it on all the inputs. it will provide detailed memory and timing stats for each input.
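for illustration, a go solution obeying this contract could look like the sketch below. the task (sum n numbers) is made up:

  // problemx.go: reads the input from stdin, writes the answer to stdout.
  package main

  import (
    "bufio"
    "fmt"
    "os"
  )

  func main() {
    in := bufio.NewReader(os.Stdin)
    var n int
    fmt.Fscan(in, &n)
    sum := 0
    for i := 0; i < n; i++ {
      var v int
      fmt.Fscan(in, &v)
      sum += v
    }
    fmt.Println(sum)
  }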

the user can also run the evaluator for one specific input in case they are debugging one:

  coderate eval problemx.cc 07

they can also "watch" a file. in that mode coderate detects whenever the file changed and immediately reruns the solution on all inputs. the current state would be always presented in a nice, colorful ascii table in the terminal. the user just needs to edit the code and the rest is dealt with.

  coderate watch problemx.py
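a minimal sketch of how that watch loop could work, polling the file's modification time (a real implementation might use inotify instead):

  // watch calls rerun whenever the file's mtime changes.
  package main

  import (
    "fmt"
    "os"
    "time"
  )

  func watch(file string, rerun func()) {
    var last time.Time
    for {
      if fi, err := os.Stat(file); err == nil && !fi.ModTime().Equal(last) {
        last = fi.ModTime()
        rerun()
      }
      time.Sleep(500 * time.Millisecond)
    }
  }

  func main() {
    watch("problemx.py", func() {
      fmt.Println("file changed, rerunning the solution on all inputs...")
    })
  }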

coderate will come with recipes for most languages. but it should be easy to add custom rules. each recipe consists of an optional build step and a run command. the percent sign would always be replaced with the problem's name. examples:

  coderate setrecipe ext=c build="gcc -Wall -Wextra -O2 %.c -o %" run="./%"
  coderate setrecipe ext=go run="go run %.go"
  coderate setrecipe ext=py run="python %.py"

in go's example the separate build command is omitted. the downside of this is that go would rebuild the file for each input, so the timing stats would be off. a better recipe would be this:

  coderate setrecipe ext=go build="go build %.go" run="./%"
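internally i'd imagine a recipe being little more than string substitution plus a shell invocation. a sketch of that idea (the recipe struct and its fields are just how i picture it, nothing is final):

  // exec runs a recipe's steps with % replaced by the problem's name.
  package main

  import (
    "os"
    "os/exec"
    "strings"
  )

  type recipe struct {
    build string // optional; empty means no separate build step
    run   string
  }

  func (r recipe) exec(problem string) error {
    for _, cmdline := range []string{r.build, r.run} {
      if cmdline == "" {
        continue
      }
      cmd := exec.Command("sh", "-c", strings.ReplaceAll(cmdline, "%", problem))
      cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
      if err := cmd.Run(); err != nil {
        return err
      }
    }
    return nil
  }

  func main() {
    r := recipe{build: "go build %.go", run: "./%"}
    r.exec("problemx")
  }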

# upload

when the user is done with a problem and is ready to share the solution, they have to upload it. but before they can do it, they have to "join" the current event. since the system would allow multiple simultaneous events, each event would have a secret identifier. the organizer would share this id with the participants beforehand. then users can use that to select where to upload their solutions:

  coderate join [event_id]

subsequent upload commands will then be associated with the given event. the user would upload their solution like this:

  coderate upload problemx.java

the tool would rerun the solution on the inputs, gather stats, and upload the solution along with those stats. sure, the stats will not be comparable across different users due to the differences between their computers. however, the reviewers can rerun the author's latest code on their own machine if they so desire:

  coderate check alice/problemx.rs

the authors can upload multiple versions of their code as it improves. if the reviewer wants to rerun a specific testcase on a specific version, they can do so. e.g. to rerun bob's 3rd attempt on testcase 07:

  coderate check bob/problemx.js/3 07
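the tool would need to parse this user/file/version addressing. a sketch of how i'd imagine that (the format is not final):

  // parseTarget splits a check target such as "bob/problemx.js/3".
  package main

  import (
    "fmt"
    "strconv"
    "strings"
  )

  type target struct {
    user, file string
    version    int // 0 means the latest version
  }

  func parseTarget(s string) (target, error) {
    parts := strings.Split(s, "/")
    switch len(parts) {
    case 2:
      return target{user: parts[0], file: parts[1]}, nil
    case 3:
      v, err := strconv.Atoi(parts[2])
      if err != nil {
        return target{}, fmt.Errorf("bad version %q: %v", parts[2], err)
      }
      return target{user: parts[0], file: parts[1], version: v}, nil
    }
    return target{}, fmt.Errorf("bad target %q", s)
  }

  func main() {
    fmt.Println(parseTarget("bob/problemx.js/3"))
  }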

# review

coderate's website will present a grid. each row contains a user and each column is a problem. i expect more users than problems so this is the sensible layout.

each cell will be a short status summary of the user's solution for the given problem. the cell's background color is:

the cell's text is 3 numbers: average rating (1-5 scale like on amazon), number of raters, number of open comments.

the reviewer clicks on a cell and coderate presents the solution page. the page has several tabs they can select:

the reviewer can highlight parts of the code and start a comment thread about it. they can also start a comment thread without any context, or add another comment to an existing thread.

the first line of each comment is special. it's meant to be the "action summary" and can be at most 80 characters. if that line ends with ! or ?, the comment thread is open; otherwise it is closed. for open comments the line can start with "@u/user1, @u/user2:" in which case the thread is assigned to user1 and user2, who are expected to respond. otherwise open threads are assigned to the author of the solution. the summary line of the last comment in each thread is always the thread's current summary.
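a sketch of how these summary-line rules could be checked in code (the type and field names are mine):

  // parseSummary classifies a thread's summary line per the rules above.
  package main

  import (
    "fmt"
    "strings"
  )

  type summary struct {
    open      bool
    assignees []string // empty means assigned to the solution's author
  }

  func parseSummary(line string) summary {
    s := summary{open: strings.HasSuffix(line, "!") || strings.HasSuffix(line, "?")}
    if i := strings.Index(line, ":"); s.open && i >= 0 && strings.HasPrefix(line, "@u/") {
      for _, u := range strings.Split(line[:i], ",") {
        s.assignees = append(s.assignees, strings.TrimPrefix(strings.TrimSpace(u), "@u/"))
      }
    }
    return s
  }

  func main() {
    fmt.Println(parseSummary("@u/user1, @u/user2: wdyt about a range loop here?"))
  }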

the summary lines add some overhead to commenting. they have an upside though: they make the reviewer think hard about the point they want to make and force them to formulate it succinctly. less fluffiness. and then the author can list all open threads and see their summaries. i think such an overview could be quite nice.

as a general idea, i'd expect most comments to be in this form:

  code:
    sum := 0
    for i := 0; i < len(vs); i++ {
      sum += vs[i]
    }
  comment:
    wdyt about using a range loop here?
    range loops are shorter and more idiomatic.
    like this:

      sum := 0
      for _, v := range vs {
        sum += v
      }

the first line of the comment ends in a question mark so the comment is open and is assigned to the author automatically. "wdyt about" questions are a pretty nice way to provide feedback that still allows rejection without being weird about it:

  response:
    acknowledged, i'll keep range loops in mind in the future.
    i'll keep this code as is for now to save time for my other tasks.

the goal of the author would be to resolve all open threads one way or another: either accept the suggestion or challenge the reviewer. one can accept feedback even without going ahead and fixing the code right away, just like in the above example. the point is that one can learn a lot through such conversations.

the user prepares a batch of comments and publishes them in bulk. the comments are persisted as the user is writing them so that an accidental tab close doesn't result in lost data.

i'm well aware that i'm not describing what i'm imagining very clearly. all this is more like notes for my future self. i hope this is enough text for him to remember the picture i have in my head right now for the interface. and yeah, maybe all this is overkill, but i always wanted to experiment with a review interface. looks like this project could be my ideal playground for that.

# edit: review incentives

what would be the motivational factor for someone providing a review to others? okay, here's an idea to gamify this. whenever a user opens someone else's solution, they will be required to answer two questions:

- what did you like about this solution?
- what would you improve in this solution?

basically the reviewer has to leave a praising and a constructive comment. i'm not sure how to phrase the prompt for the constructive one. i want to ensure something meaningful can be written for almost any solution, e.g. "the solution looks solid. perhaps i'd add more clarifying comments or would use clearer variable names." or something like that. and the point of the praise is to ensure the feedback has some nice stuff in it so that the participants keep coming back to hear nice things about their work.

while beginners would focus just on finishing the solutions, giving reviews could be the attraction for the more experienced participants. i'm not fully sure how fun this would be but it's something that could be experimented with.

# limits

this service could be quite cheap to run. all it needs to do is serve some data files and let users upload code and comments. ratelimits should be enough to keep people in check.

each comment is at most 2k bytes long. the reviewing platform is not a soapbox. reviewers should keep their comments short or split their thoughts into smaller points.

and there would also be limits on how many event rooms can run simultaneously. this could be as low as 10 to keep the service lean and fast. if you want to make sure you have a room at the scheduled time, you can reserve one for the given time. and if there are too many reservations already, you can contact the service's admin to manually preregister the event. this extra process is only needed to keep spam and nasty robots at bay.

# feasibility

this post is getting quite long so i omitted a lot of implementation details. e.g. how the problems' input-output data would be served or how the comment notifications would work. but these are probably not hard problems; i'm sure one can figure them out during the implementation.

i think all this would be relatively simple to implement, especially in go. it is completely multiplatform, so it would be easy to support platforms like linux, windows, and mac out of the box.

maybe one day i'll go and implement this and try it among friends. or maybe it could be used to teach coding to novices. the realtime feedback system could be immensely conducive for learners who thrive in that environment. so maybe i could try organizing a "let's practice coding" meetup and use this there.

anyway, i still have a few things queued up so i'm not getting to this anytime soon. maybe later.

# edit: measurements

printing stable memory and cputime footprints when comparing solutions could be a nice motivating factor for looking into improvements.

in the meantime i've learned about facebook's "hermit" software. it's like rr but supposedly better: https://developers.facebook.com/blog/post/2022/11/22/hermit-deterministic-linux-testing/. basically it can run software deterministically.

the nice thing about deterministic runs is that even the time measurement is deterministic. so this piece of software could be used to measure the tiniest changes in runtime and to deterministically rank solutions by performance. per https://news.ycombinator.com/item?id=33712414 this is still in an early phase, but i think it would work perfectly for the simple usecase from this post.

if possible, coderate should detect whether hermit is present on the system and, if so, use it to run the solutions rather than running them directly.
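a sketch of that detection (i haven't used hermit, so the exact invocation below is a guess):

  // solutionCmd wraps the solution in hermit when it's available on PATH.
  package main

  import "os/exec"

  func solutionCmd(args ...string) *exec.Cmd {
    if hermit, err := exec.LookPath("hermit"); err == nil {
      // assumed invocation: hermit run -- <program> <args...>
      return exec.Command(hermit, append([]string{"run", "--"}, args...)...)
    }
    return exec.Command(args[0], args[1:]...)
  }

  func main() {
    solutionCmd("./problemx").Run()
  }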

# edit: problems

i expect to have a bunch of easier tasks for beginners and a bunch of complex problems for more experienced folks. if i'm ever looking for a high quality problemset with harder tasks, i shall look at https://cses.fi/problemset/list. it's pretty good and has a creative commons license. the only downside is that the testdata doesn't seem to be available, but it's nevertheless a good start.

# edit: stretch goals

some additional feature ideas:

published on 2022-09-03, last modified on 2023-04-03

