# comments: i've enabled anonymous commenting on this site

# summary

# markup

the comments support some markup-based formatting. a paragraph's first character determines how it gets formatted.

web links are deliberately not supported. i'm allowed to use them in my comment responses though.

# motivation

one thing i really like on blogs: having the ability to leave a comment. i like reading some challenging responses to the blog posts. and sometimes i feel i really want to point something out to the author. ideally i could do this without dealing with account management and all that.

on the other hand low quality comments can really ruin this section. i definitely don't want to deal with spammers and low effort comments. to combat these aspects, i'll open anonymous comments with very heavy limitations.

first, the commenter will need to wait 1 minute between crafting their message and committing it to my site. i've talked about this idea in @/cooldown. i hope that during the cooldown the user will either not bother posting the message if it's not that important, or get a chance to review it and make it even nicer if needed. although a change will cost them an additional minute. i hope only the really important messages get posted here, the ones definitely worth the 1 minute wait.

i also ratelimit the incoming comments to a few messages per hour. i don't think anybody will post here as i don't think i have readers, but i definitely don't want a firehose of messages even if a post of mine gets popular. this limit should be fine for my initial experiment.

these limitations will defeat automated comment spam but won't stop a dedicated attack on my site. if it comes to that point, i'll simply kill this experiment.

# implementation

the backing store for the old comments is just a @/actionlog file in the blog's git project. this allows me to get the comments into the static archive too. the actionfile has the following format:

  [unix_millis] comment [postname] "[message]" "[response]"
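
parsing that format is straightforward. here's a rough sketch in go of what a line parser could look like; it assumes messages contain no unescaped double quotes, which the real format may handle differently:

```go
package main

import (
	"fmt"
	"regexp"
)

// lineRE matches one actionlog line of the form:
//   [unix_millis] comment [postname] "[message]" "[response]"
// simplifying assumption: no embedded double quotes in the fields.
var lineRE = regexp.MustCompile(`^(\d+) comment (\S+) "([^"]*)" "([^"]*)"$`)

// parseActionLine extracts the fields, reporting ok=false on a non-matching line.
func parseActionLine(line string) (millis, post, msg, resp string, ok bool) {
	m := lineRE.FindStringSubmatch(line)
	if m == nil {
		return "", "", "", "", false
	}
	return m[1], m[2], m[3], m[4], true
}

func main() {
	millis, post, msg, resp, ok := parseActionLine(`1659312000000 comment cooldown "nice idea!" "thanks!"`)
	fmt.Println(ok, millis, post, msg, resp)
}
```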

my server keeps all the comments in memory and serves them from there. new comments are persisted in a cloudflare kv store until i manually move them to the actionfile. once the server sees that a comment is in the actionfile, it deletes it from the kv store. all this should be transparent to the users though.

the markup is quite restrictive but in exchange it's super easy to implement in most languages. i have the same implementation in both javascript and go: split the message on "\n\n" and then look at the first character of each paragraph to decide what html tag to wrap it and what additional processing it needs.
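
the go side of that dispatch could look something like this sketch. the specific prefix characters ("-" for list items, ">" for quotes) are my illustrative assumptions, not necessarily the site's real markup rules:

```go
package main

import (
	"fmt"
	"html"
	"strings"
)

// renderComment splits the message on blank lines, then picks an html
// wrapper tag based on each paragraph's first character. the "-" and ">"
// prefixes are assumed here for illustration; the real rules may differ.
func renderComment(msg string) string {
	var out strings.Builder
	for _, p := range strings.Split(msg, "\n\n") {
		if p == "" {
			continue
		}
		esc := html.EscapeString(p)
		switch p[0] {
		case '-':
			out.WriteString("<ul><li>" + esc + "</li></ul>")
		case '>':
			out.WriteString("<blockquote>" + esc + "</blockquote>")
		default:
			out.WriteString("<p>" + esc + "</p>")
		}
	}
	return out.String()
}

func main() {
	fmt.Println(renderComment("hello\n\n> a quote"))
}
```

because the dispatch only ever inspects one character and the split is a single call, porting this between javascript and go is nearly mechanical.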

and the cooldown part is implemented by the mechanism i proposed at the end of @/cooldown. when the user presses preview, their browser computes the message's sha2-256 and sends that to my server. my server responds with a token that was generated by hashing timestamp+commentid+user_hash+secret_salt+cooldown_timestamp. when the cooldown has elapsed and the user presses post, their browser sends that token along with the comment. the token includes the timestamp so the server can verify that the message is something the user uploaded before, without storing any state for this.
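
a minimal sketch of such a stateless token, using an hmac instead of the exact field concatenation above (the real server hashes more fields; secretSalt here is a made-up stand-in for the server's secret):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// secretSalt stands in for the server's real secret.
var secretSalt = []byte("hypothetical-secret")

// makeToken binds the message hash to the time the cooldown ends.
// because the token is an hmac, the server can later verify both
// without storing anything.
func makeToken(msgHash string, cooldownEnd int64) string {
	mac := hmac.New(sha256.New, secretSalt)
	fmt.Fprintf(mac, "%s|%d", msgHash, cooldownEnd)
	return hex.EncodeToString(mac.Sum(nil))
}

// verifyToken recomputes the hmac and checks the cooldown has elapsed.
func verifyToken(token, msgHash string, cooldownEnd, now int64) bool {
	return now >= cooldownEnd &&
		hmac.Equal([]byte(token), []byte(makeToken(msgHash, cooldownEnd)))
}

func main() {
	tok := makeToken("deadbeef", 1000)
	fmt.Println(verifyToken(tok, "deadbeef", 1000, 1060))
	fmt.Println(verifyToken(tok, "deadbeef", 1000, 900))
}
```

the client has to echo cooldownEnd back alongside the token for verification to work, which is fine: tampering with it invalidates the hmac.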

security is not really the point of this. it's just obscure enough to guard against simple spammer bots and make it not-so-simple to bypass the cooldown.

in case you want to contact me privately, send an email to qxp2fs8j at anonaddy.me.

anyway, comment away!

published on 2022-08-01, last modified on 2023-12-12


comment #1 on 2022-08-01

this is what a comment looks like.

comment #1 response from iio.ie

and this is how a response from me looks like.

comment #2 on 2022-10-08

Globally rate limiting across the whole site discourages commenting, especially when you publish 6 posts at once, each of which takes less than 10min to read, digest, and formulate a comment for.

comment #2 response from iio.ie

i started out allowing 1 comment per hour because i wasn't sure what to expect after opening up to the internet. but it looks like nobody is trying to spam me, so i bumped the limit to 4 comments per hour. i still want to keep the global limits because it makes me sleep better knowing that i won't get a gazillion messages overnight.

but now i also added alerts to notify me if this new limit is ever reached. then i can reevaluate again if i need to further adjust the rules.

comment #3 on 2023-02-23

I have a question I've been breaking my head with, probably with a simple answer if not for my lack of good understanding of the web: how do I ensure that the server responds ONLY to requests coming from my site, and no-one else?

In your implementation, when I click preview, my browser sends a message to your server, and your server responds with a token, that will allow me to post the message. On the server-side, how do you ensure that the request from my browser is coming from your website only?

How does your server know that this fetch request:

  fetch('/commentsapi', {
    method: 'POST',
    body: `sign=${msghash}`,
    headers: {
      'content-type': 'application/x-www-form-urlencoded',
    },
  })

is coming from your website, and not from any random terminal doing a CURL request with the appropriate request & headers to look like your website?

I am not nitpicking on the security of the implementation here, I just want to understand how to implement this check, as I wasn't able to find any simple answer on the internet. This says it's impossible: https://security.stackexchange.com/a/246442.

I'd appreciate it lots if you could point me to some reading that will help me understand this.

comment #3 response from iio.ie

On the server-side, how do you ensure that the request from my browser is coming from your website only?

i don't know what tool you post your comment with. you could totally post a comment from curl too if you fetch the signature with one curl call and then send it back via a second one.

one protection i can add is to check the origin http header. someone could create a page which, on visit, would post an anonymous comment on my site. but then the origin will be the other site's domain, so i can prevent that by dropping requests with an unexpected origin header.

but this won't prevent someone just posting something from the terminal, where they can set any header. from the terminal you can mount a denial-of-service attack against me, but then all requests come from a single place. by putting some javascript that contacts my site onto some popular page, you could mount a distributed-denial-of-service attack: every visitor of that page would try to contact my site. the origin check, however, easily defends against this attack.

there are other approaches described at https://en.wikipedia.org/wiki/Cross-site_request_forgery.

nevertheless, i'm having a bit of trouble with your question. it sounds like an xy problem (https://en.wikipedia.org/wiki/XY_problem). what feature are you trying to implement in your service?

comment #4 on 2023-02-23

Thanks for the fast response! The CSRF wiki describes some interesting solutions, thanks.

Ha, could be an XY problem. I'm still learning lots on how the web works. I have a website, https://aristot.io which is currently pure client-side and I want to add a commenting system to it. I don't want to rely on 3rd party, and I thought it would be a good learning project to make a simple backend for this. I am learning to create APIs with Node - seemed to be a worthy tool to invest time learning - and intend to get some basic server hosting somewhere like digital ocean and add a SQL database (probably overkill, but good for a learning project) to store comments, manage newsletter subscribers, and whatever else arises.

But I can't think of how I would ensure, for example, that only someone from my website is able to make requests to my API. I don't want my DB to get spammed with data unrelated to my site. Am I thinking about this wrongly?

P.S. I love the cooldown concept :)

comment #4 response from iio.ie

[...] only someone from my website would be able to make requests to my API. I don't want my DB to get spammed with data unrelated to my site.

the former won't protect against the latter. i could go to your website, open the web console, and spam you with requests which would genuinely seem to come from your website.

let me try a metaphor. your service is a mailbox, and the browser is the postal service that delivers mail and packages to it. the postal service will ensure that it only delivers legit mail and packages. but how do you prevent the local kids from putting trash, flyers, or real-looking mail into your mailbox? if you want to keep it open to all, you can't really do anything against this. anybody will be able to put stuff into your mailbox. similarly, anybody will be able to send you random requests, even real-looking ones.

you could lock your mailbox down and give the postal service the key. similarly, you could only allow registered users to post. then perhaps you don't even need to care where the request is coming from.

nevertheless, if you want to protect your db, you need to implement a ratelimit. at the moment here on my site i only allow 4 comments per hour and so far the limit wasn't reached. even honest users might inadvertently spam you if some post gets wildly popular and the commenters start a heated discussion among each other. the ratelimit would not only protect your db but also cool down the comments a bit.

i would also argue against such lockdowns even if they were technically feasible. innovation is frequently about combining things unexpectedly. maybe your comment section gets popular and a user wants to write an app or extension to make the commenting experience better. the extension could notify users about new comments and allow quick responses. or maybe someone wants to create some statistics about the comments and needs to access your content programmatically. why prevent people from trying to improve your site?

comment #5 on 2023-02-24

Okay, yes I see your point.

Last thing that's still dark in my head:

maybe your comment section gets popular [...] why prevent people trying to improve your site?

If anyone has access to the API and my website's code is open-sourced, doesn't that mean that in theory someone can just clone my site, publish it under some other domain/host, and then use my backend server to host the comments of *their* website? How do backends in general prevent this kind of abuse? Ratelimiting would slow it down, but still, anyone would be able to use my backend as free storage/db. Or?

comment #5 response from iio.ie

checking the origin or referer header would prevent that. the users of that clone-site wouldn't be able to talk to your site if you do that. this check is like two lines of code on the server side. :)
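
in go's net/http, that check could look something like this sketch. the domain is a placeholder, and letting an empty origin through is my assumption (non-browser clients often omit the header, and the check only aims to stop cross-site browser requests):

```go
package main

import (
	"fmt"
	"net/http"
)

// allowedOrigin accepts requests with no Origin header or with the
// site's own origin. "https://example.org" stands in for the real domain.
func allowedOrigin(origin string) bool {
	return origin == "" || origin == "https://example.org"
}

// commentsHandler drops cross-site requests before doing any work.
func commentsHandler(w http.ResponseWriter, r *http.Request) {
	if !allowedOrigin(r.Header.Get("Origin")) {
		http.Error(w, "bad origin", http.StatusForbidden)
		return
	}
	fmt.Fprintln(w, "ok")
}

func main() {
	fmt.Println(allowedOrigin("https://example.org"))
	fmt.Println(allowedOrigin("https://evil.example"))
}
```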

but if someone creates a tool or browser extension to talk to your site, then yeah, they could use it as a storage. adjust your limits in a way that it makes this impractical. or just limit commenting to registered users if you are concerned about unintended usage. if some user misuses your service, you simply ban them. i think banning bad users is what most sites do.

but then make sure the registration is somewhat limited (asking for money is a good deterrent) and that the new users cannot immediately spam you. coming up with the right rules is a rabbit hole. it's best to not worry too much about it initially. just adjust over time based on the actual user behavior you see.

