The pricing for S3 is available here.
It’s worth noting that if you’re eligible for AWS free tier you will receive 5 GB of storage for free during your first 12 months.
Make sure you have created an AWS account and added billing information before attempting to make an S3 bucket.
Once on the AWS dashboard, search for “S3” in the search bar at the top and go to the S3 dashboard for your region.
After that, go to “Create bucket” and give your bucket a unique name like my-notes. Make sure the region is set to your correct region. You will want to have “Block all public access” ticked.
I would recommend turning off “Bucket Versioning”, as Joplin has “Note History” built into the program which allows you to easily recover your documents. You can turn on Bucket Versioning if you are worried about application failures with Joplin syncing across devices; just be careful about the added expense. You can change this at any time.
I would also recommend turning on “Default encryption” for your bucket. Even if you have end-to-end encryption enabled in Joplin, it’s always a good idea to enable it for the S3 bucket too. It doesn’t cost any extra and any processing delays will be negligible.
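If you prefer the command line, the same bucket setup can be sketched with the AWS CLI (assuming you have the CLI installed and configured; my-notes and ap-southeast-2 are placeholder values - substitute your own bucket name and region):
# Create the bucket in your region (LocationConstraint is required outside us-east-1)
aws s3api create-bucket --bucket my-notes --region ap-southeast-2 --create-bucket-configuration LocationConstraint=ap-southeast-2
# Block all public access
aws s3api put-public-access-block --bucket my-notes --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
# Turn on default (SSE-S3) encryption
aws s3api put-bucket-encryption --bucket my-notes --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'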
Once your bucket is created you must generate IAM credentials that have access to it. Before you can do this, however, you must create a policy which restricts access to only your bucket.
On the AWS dashboard search for “IAM”. You should get a screen similar to the one below. Once here click on “Customer managed policies”.
Once there click “Create policy” and complete the following steps in the “Visual editor”:
Add the S3 service and the actions you want to allow
Then add the ARN for your bucket
Then add the ARN for the objects in your bucket
Once these steps are completed, click “Review policy”, give it a name such as joplin-my-notes-policy, and click “Create policy”.
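If you’d rather skip the visual editor, here’s a rough CLI equivalent of the policy it produces (a sketch assuming your bucket is named my-notes; the exact action list should match whatever you selected in the steps above):
# Write the policy document, then create the policy from it
cat > joplin-my-notes-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-notes"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::my-notes/*"]
    }
  ]
}
EOF
aws iam create-policy --policy-name joplin-my-notes-policy --policy-document file://joplin-my-notes-policy.json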
Now with your policy in place you can finally generate credentials for your devices. I would recommend that each device has its own set of credentials/user so you can revoke them easily and not rely on one key.
To create a user, search again for “IAM”, but this time select “Users”. Click “Add user” at the top, give it a name like joplin-desktop, and tick “Programmatic access”.
From here you need to add the policy you just created to your user. Click “Attach existing policies directly” at the top then click “Filter policies” and select “Customer managed”.
Your custom policy from before should be listed and from there you can select it and click “Next” at the bottom. From here you can click “Next”, “Review” and then “Create user”. Once the user has been created your credentials (Access key ID and Secret access key) will be listed.
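The CLI equivalent, if you’d rather script the per-device users (ACCOUNT_ID is a placeholder for your AWS account ID):
# Create a user for this device, attach your policy, then generate its access key
aws iam create-user --user-name joplin-desktop
aws iam attach-user-policy --user-name joplin-desktop --policy-arn arn:aws:iam::ACCOUNT_ID:policy/joplin-my-notes-policy
aws iam create-access-key --user-name joplin-desktop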
You should now have all the information you need for Joplin.
Open Joplin and click Tools -> Options and open the Synchronisation tab. From here fill it out like below.
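In case the screenshot doesn’t match your Joplin version, the fields boil down to something like this (field names vary slightly between releases, and the URL shown is the standard S3 endpoint - adjust it if your bucket lives elsewhere):
Synchronisation target: AWS S3
AWS S3 bucket: my-notes
AWS S3 URL: https://s3.amazonaws.com/
AWS key: <Access key ID of the IAM user>
AWS secret: <Secret access key of the IAM user>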
Now your Joplin should be syncing with your S3 bucket. After syncing you should be able to refresh your bucket and see all your Joplin files and notebooks there.
Before you follow this guide, note that the migration script (which generates an Evernote backup of your Keep files so you can import them into Joplin) is written for Python 3, so you will need Python 3 installed on your system.
That’s right, you need a script to convert the Keep data you download to a format that Joplin can import (Evernote’s ENEX format) so you can view your notes in Joplin.
A second thing to note: by default, if a note is unnamed it will simply be titled after the date it was first made, instead of the first line or the start of the note’s content. I suppose someone could rewrite the script to detect when a date is a title and, if so, rename the note after the beginning of its content… I had ~300 untitled notes that I manually sorted and deleted after importing, but this process can be really time-consuming.
You need to export your notes from Google Keep. Fortunately, Google provides a service called “Google Takeout” which allows you to export your data from many of their services (Keep included). Head over to https://takeout.google.com/settings/takeout and you’ll get a screen that looks similar to this:
You want to make sure you press “Deselect all”
Then scroll down and find “Keep” and tick it
Before finally creating your export with these parameters
You should shortly receive an email with a link to download your data. Google will make you verify your account with your password before your zip/tgz can be downloaded.
Once you have downloaded and extracted your files download a copy of my modified version of the keep-to-enex script from here: https://gist.github.com/itsjfx/689ae620222240911a3efae33e313b1b. The original version is available here https://gitlab.com/charlescanato/google-keep-to-evernote-converter but it didn’t work for me so I added a modification that allowed it to work.
Put that script in the folder containing the Keep/ folder, which should be Takeout/. After that, run the script with python keep-to-enex.py -o output.enex -f "./Keep" - where output.enex will be the name of the file generated and “output” will be the name of the notebook created from your backup. Feel free to rename it to something such as “Personal.enex” instead.
Now open Joplin, go to File -> Import -> ENEX - Evernote Export File (as Markdown) and select the ENEX backup file you just generated.
Once this process is complete a notebook with the same name as your backup will be created and all your notes (and images) will be imported.
As for what transactional email is, Postmark provides a good definition on their website:
Transactional email is typically a unique, high-priority message sent to a single recipient. They are often triggered by something a user does or doesn’t do.
This is different to marketing/bulk/broadcast email:
[Broadcast email] is sent to multiple recipients at once. Things like product update announcements or terms of service notices are examples of broadcast emails.
If a user does not receive their transactional email then this will most likely result in a bad user experience.
Imagine this scenario: your website requires a user’s email to be verified before they can purchase a product. Now imagine a user cannot verify the email for their account because they never receive a verification email. This results in the user leaving your website empty-handed, unable to buy their desired product. As a result, the user is unlikely to use your website again due to their bad experience.
This scenario turned out to be a reality for me.
Sending transactional email just sucks because there’s a lot that can go wrong. Using an email delivery provider which is blacklisted by major email hosting providers is another thing to add to the list.
At SteamLevels we were looking for an email sending provider which could reliably deliver our transactional emails (account verification being the critical one) to our users.
We never send any spam or promotional mail, and our users may only ever receive 2 emails in their lifetime for their account. Several headaches were endured before we finally settled on a provider which offers this service with great success and reliability.
When we first started having issues with email delivery this was the first idea we had to solve the issue. If other people are ruining our reputation due to us sharing an IP with them, why not just move to our own IP? We quickly discovered that due to the low volume of emails we send (~5k a month) moving to a dedicated IP would create a new set of issues.
This is due to IP reputation, which is a core part of email delivery; Postmark have a good blog post on the subject where they discuss dedicated IPs better than I could.
Because of the low volume of emails we send, we were forced to continue looking for an email provider which had reputable shared IPs.
We began by using Mailgun for sending our transactional email (who promote themselves as a “Transactional Email API Service For Developers”).
We quickly discovered that Yahoo and AOL blacklisted Mailgun, and as a result none of our customers under those hosts could receive our verification emails. This resulted in us losing customers – and we discovered that yes, people still use Yahoo and AOL for their email.
Yahoo/AOL and Mailgun seem to have such a hateful relationship that Mailgun even wrote a blog post about it. None of the suggestions in the post proved useful as we rarely sent emails to these providers and they still blacklisted us, so it was off to another provider for us.
This was when we went to SendGrid. Everyone loves to talk about this provider, so it was the next logical one to try. Upon switching, we noticed we were able to send emails to several major email providers (Gmail, Yahoo and Outlook), which was cause for celebration – yay… but not for long.
It wasn’t long before we noticed that only ~93% of our emails were being delivered to our users. On further investigation, we discovered yet again that our users’ email providers were blacklisting our SendGrid IPs due to spam being sent from them by other customers.
We even went to the effort of contacting a German postmaster. They responded with:
Your provider has added you to a pool of senders that regularly sends spam to our customers and to our own spamtrap addresses.
At this point we were extremely unhappy with SendGrid and had an extended support case. They informed us they are unable to move their customers to other shared IP pools manually as it gives the impression of “snowshoe spamming”.
Their suggestion was for us to pay for a dedicated IP (an extra $75 a month). Upon further questioning as to whether a dedicated IP is appropriate for the number of emails we send – their support staff said that they would “normally not recommend” having a dedicated IP but that it would be the most “decisive way” to fix the issue.
They also suggested we contact each postmaster ourselves and ask them to whitelist our domain.
We were not pleased with their support, so we went looking at the market again.
We finally discovered Postmark after their Why Postmark? page caught our attention on Google. At this point I was tired of transferring email templates across providers and rewriting code to use a new API, but just like Goldilocks, the third time was the charm.
Before being allowed to use Postmark, we had to fill out a form describing our reasons for using their service, and we were manually approved very quickly. This is so their service is only used for transactional email, and so they can maintain their high reliability and deliverability rates.
They have wonderful support and offer to talk to you face-to-face when you first sign up in order to make sure you are getting the best service possible.
They also send weekly digests about your deliverability so you can keep track of whether there are issues with your service. On top of this, they offer a free DMARC monitoring service which provides a human-readable summary of your DMARC reports.
Postmark also provide dedicated IPs for an extra $50 per IP per month, and offer automated warmup of these dedicated IPs to ensure your IP reputation is up to scratch. More information on their dedicated IP offerings is available here.
Enough with the sales pitch and back to our experience using them. Over 3 months’ worth of emails (~12k), only 7 were flagged as SPAM (all to a single Russian email host). We had a bounce rate of 1.8%, mostly due to mailboxes not existing (user typos) or being over quota.
It’s safe to say that with these rates we are very pleased with Postmark and will not be going anywhere.
That said, there’s other email providers out there which could offer a better service depending on your needs. If your application uses Amazon Web Services (AWS) it may be worth looking into Amazon’s Simple Email Service (SES). They have reduced pricing if you send emails from other AWS services (e.g. Lambda, EC2). More information on their pricing is available here.
I personally haven’t had to send any marketing or bulk email, but Postmark is now offering “broadcast messages” (bulk emailing) through their “message streams” service. More information is available here. Since Postmark is so good at sending transactional email, I have high hopes for their bulk/broadcast email service too – and would encourage anyone eager to send marketing email to investigate.
Otherwise, due to Amazon’s flexible pricing system and cheap dedicated IP offerings, it may be worth considering SES: you pay per email sent, so you can be more flexible with your email budget.
I have also heard good things about Mailchimp; Postmark previously recommended them for sending bulk email.
Due to my previous experiences with SendGrid and Mailgun I would not recommend them to anyone. Perhaps SendGrid for bulk emails, but be wary of your delivery rates with their service.
If you’re interested in learning more about email reputation Mailjet has a good article on the subject.
So you’ve got your nginx server set up (hopefully) and it’s serving your files (or being an effective reverse proxy), but maybe you’ve noticed static content (images, media, etc) loading slowly… or maybe you realised that anyone can get the IP address of your website, and therefore the box hosting it, and that creeps you out a bit. Or maybe your website doesn’t have that padlock everyone else’s has and it’s “not secure”. Luckily all these issues can be solved by Cloudflare! Or maybe your website is going through Cloudflare but sometimes you notice your origin server is still publicly serving the website, and you want to stop this leak.
Unfortunately, setting up Cloudflare for your website may seem simple, but setting it up securely and correctly can be a difficult task for the inexperienced. Although you may think your website is not leaking your origin IP address, it’s certainly possible that it is. This article will demonstrate how to secure your website through Cloudflare, and provide sample configurations for a hybrid nginx server which stays secure when some vhosts are utilising Cloudflare and others aren’t (non-Cloudflare).
This is not a complete guide on how to set up nginx for speed or anything like that; please just rely on this for securing your vhosts and origin server.
This guide assumes you’re using the mainline version of nginx, which means your sites are located in /etc/nginx/conf.d/ and your nginx user is nginx. To install the mainline version of nginx, which is recommended by the nginx team, follow this guide here - just remember that guide will not work on any version of Ubuntu except 18.04 UNLESS you change the release name from bionic to your own release name, which you can find on Google. Maybe I’ll write my own guide :)
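For reference, the repository line that guide has you add looks something like the following - using $(lsb_release -cs) instead of hard-coding bionic means it picks up your own release name (a sketch; double-check it against nginx’s official install instructions, which also cover importing their signing key):
# Add the nginx mainline repo for your Ubuntu release, then install
echo "deb http://nginx.org/packages/mainline/ubuntu $(lsb_release -cs) nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt update && sudo apt install nginx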
Generating a self-signed certificate is a step that many guides on the internet skip; instead they ask you to get a Cloudflare Origin TLS certificate for all your sites. While this is good practice because you can use Full (Strict) mode for your SSL on Cloudflare, it’s not good to implement if you wish to conceal the identity of your origin server.
The reason why is shown in the screenshot below.
As you can see, although no content is sent, the identity of the website is exposed by the certificate being sent to the user. A simple scraper sweeping IP ranges could read the domain names on the certificate and easily expose the origin server’s IP address.
This is a Cloudflare and nginx website I set up where the default_server block sends a Cloudflare Origin TLS Certificate and requires Authenticated Origin Pulls. Don’t worry if you don’t have these set up; they are covered in the next steps of the guide (Authenticated Origin Pulls). For reference, this was following the DigitalOcean nginx+Cloudflare guide.
With that out of the way, here’s how to generate the self-signed certificate. You will need to make sure you have openssl installed on your system.
This will generate a public and private certificate that will last for 15 years. Feel free to extend it.
sudo openssl req -x509 -nodes -days 5475 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.pem -out /etc/ssl/certs/nginx-selfsigned.pem
It will ask you for some values, just keep them as the default and maybe change your country name if desired. We will use this later in our nginx setup.
For this guide I’m assuming your nginx configurations are stored in /etc/nginx/conf.d/ (mainline branch); otherwise you can follow along with /etc/nginx/sites-available/.
Nginx will give us a default server file which will give you the following output once viewing your website:
This file (default.conf) can be renamed to default.old (if in conf.d) - or removed from your sites-enabled folder - and should be replaced with this new file:
sudo nano /etc/nginx/conf.d/default.conf
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 403;
}
server {
# SSL configuration
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.pem;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.pem;
server_name _;
return 403;
}
Then type sudo nginx -t to make sure your configuration is correct, before typing sudo systemctl restart nginx to restart nginx so it picks up the configuration change.
This provides us with a base setup. If a request comes to the origin server and does not match a server block, it will come back as a 403. Try it out by going to the IP address of your box over http:// and https://.
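You can also check it from a terminal - with only the default_server blocks in place, both plain HTTP and HTTPS requests to the bare IP should come back as 403 Forbidden (the IP below is just the example box used later in this guide; --insecure is needed because the certificate is self-signed):
curl 'http://167.172.213.53/'
curl --insecure 'https://167.172.213.53/'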
After this we can actually configure a host we would like to route properly through Cloudflare and nginx (yay)! aka something that isn’t our default_server.
For the sake of this tutorial I’ve made a /var/www/ folder and have nginx pointed to /var/www/test/ - the config is below.
If you want to follow along, make sure you run these commands.
This little micro step is skippable if you already have a host working and serving content. If you wish to skip it, click here to jump to 3.2.
sudo mkdir -p /var/www/test/
sudo chown -R nginx:nginx /var/www/
sudo find /var/www -type f -exec chmod 664 {} \;
sudo find /var/www -type d -exec chmod 775 {} \;
sudo find /var/www -type d -exec chmod g+s {} \;
I actually copied most of these perms from this guide. Essentially: files get 664, directories get 775, and the setgid bit on directories means new files created inside inherit the group.
Reminder: if your nginx user is www-data, the chown command should use www-data:www-data instead of nginx:nginx.
Make sure your user is in the nginx (or www-data) group by typing groups USERNAME. If you cannot see the group listed, run sudo usermod -aG nginx USERNAME and restart your shell.
This will make the required folders and give nginx ownership of them. Essentially we are setting up our /var/www/ environment to be able to serve a website.
Below is a sample nginx config that will simply serve the static content of the website. If you have an existing host, make sure you add any missing fields (namely the SSL certificate ones) - and the include file.
Below are a few optimisations we can add to nginx to increase speed and also improve its security: gzip compression for text-based responses, and only TLS 1.2 and TLS 1.3 between your site and your visitors (Cloudflare in this case) - resulting in a secure transport.
You will need to make an includes folder: sudo mkdir /etc/nginx/includes
Then run this command to generate dhparams: sudo openssl dhparam -out /etc/nginx/includes/dhparam.pem 4096
Then add this conf file:
sudo nano /etc/nginx/includes/common_opts.conf
# use Cloudflare's public resolvers for any DNS lookups nginx needs to do
resolver 1.1.1.1 1.0.0.1 valid=300s;
resolver_timeout 10s;
# compress common text-based responses
gzip on;
gzip_vary on;
gzip_min_length 1000;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/xml text/javascript application/x-javascript image/svg+xml;
gzip_disable "MSIE [1-6]\.";
# only allow modern TLS versions and strong ciphers
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
# use a dhparam
ssl_dhparam /etc/nginx/includes/dhparam.pem;
Here’s our base configuration file; make sure we include the conf file from the includes folder above.
sudo nano /etc/nginx/conf.d/test.conf
server {
# SSL configuration
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.pem;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.pem;
include /etc/nginx/includes/common_opts.conf;
server_name testing.jfx.ac;
location / {
root /var/www/test;
index index.html;
}
}
As you can see the config has no HTTP support, this is because Cloudflare can do automatic HTTPS rewrites, so why bother. Feel free to add this block in if you wish to have this functionality though.
server {
listen 80;
listen [::]:80;
server_name testing.jfx.ac;
return 302 https://$server_name$request_uri;
}
Here is a friendly Hello World for HTML so we know it’s working :)
nano /var/www/test/index.html
<html>
Hello World!
</html>
And this will display our testing page when we go to our site, yay!
As for why this is important: it appears the site only serves content if the user browses to the correct domain https://testing.jfx.ac - so there’s no way the origin can be exposed, right? Wrong. With a simple curl command it is easy to bypass Cloudflare by spoofing the Host header.
jfx@PC:~$ curl --insecure --header 'Host: testing.jfx.ac' 'https://167.172.213.53/'
<html>
Hello World!
</html>
Scary stuff - what if someone was going through IP ranges and found our origin server this way? There are actually a few ways to stop this, but the easiest is to implement Authenticated Origin Pulls! More information is available here from Cloudflare themselves. You can also block all HTTP/HTTPS traffic to your box unless it’s from Cloudflare using a firewall (such as UFW); this approach is also explained below, and the pros and cons of both are listed in the table that follows.
Method | Pros | Cons |
---|---|---|
Authenticated Origin Pulls | No firewall rules required; vhosts that don’t go through Cloudflare can still be served | The web server still answers on its public IP (requests without Cloudflare’s client certificate are rejected, but the ports stay open) |
Firewall Blocking | The most bulletproof option - traffic that doesn’t come from Cloudflare never reaches nginx at all | Hosts that don’t go through Cloudflare become unreachable (unless you whitelist visitor IPs), and the Cloudflare IP list must be kept up to date |
Firstly, make sure this feature is enabled on Cloudflare or the following steps will break your site. To enable it, go to Cloudflare and go to SSL/TLS -> Origin Server -> ON for Authenticated Origin Pulls:
Next, to set up Authenticated Origin Pulls on nginx, go here and at the bottom of the page download the origin-pull-ca.pem file. Once downloaded, copy its contents and output it to /etc/ssl/certs/cloudflare-origin.pem
sudo nano /etc/ssl/certs/cloudflare-origin.pem
*contents of origin-pull-ca.pem*
Once that’s done, go back to your nginx config and add this to your SSL server block.
sudo nano /etc/nginx/conf.d/test.conf
ssl_client_certificate /etc/ssl/certs/cloudflare-origin.pem;
ssl_verify_client on;
It should look like this
server {
# SSL configuration
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.pem;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.pem;
ssl_client_certificate /etc/ssl/certs/cloudflare-origin.pem;
ssl_verify_client on;
include /etc/nginx/includes/common_opts.conf;
server_name testing.jfx.ac;
location / {
root /var/www/test;
index index.html;
}
}
A note: you do not need to use Cloudflare’s generated SSL certificates if you wish to use Authenticated Origin Pulls.
Don’t forget to run sudo nginx -t
and sudo systemctl restart nginx
to reload your config!
If we go to our website, we won’t notice a difference, but let’s try this curl spoof again:
jfx@PC:~$ curl --insecure --header 'Host: testing.jfx.ac' 'https://167.172.213.53/'
<html>
<head><title>400 No required SSL certificate was sent</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
<hr><center>nginx/1.19.1</center>
</body>
</html>
Mad stuff, now we have protected our site from people trying to grab our origin IP address!
There is a catch though: a 400 will be returned for any request that tries to spoof a host we serve, but a 403 will be returned when hitting the IP without a matching Host header. This gives away the fact that we are trying to be safe and only accept Cloudflare’s requests. A solution to this problem is simply adding the ssl_client_certificate and ssl_verify_client lines from above to default.conf as well. With these, the error returned will be consistent with the one shown above. To do this, edit default.conf and add those two lines in so it looks like this:
sudo nano /etc/nginx/conf.d/default.conf
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 403;
}
server {
# SSL configuration
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.pem;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.pem;
ssl_client_certificate /etc/ssl/certs/cloudflare-origin.pem;
ssl_verify_client on;
server_name _;
return 403;
}
With this, there is no way to determine from the outside whether the web server is hiding a specific host behind a client certificate.
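If you want to confirm it, repeat the earlier curl against the bare IP - with the client certificate lines in default.conf, it should now return the same 400 “No required SSL certificate was sent” page instead of a 403:
curl --insecure 'https://167.172.213.53/'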
It is possible using iptables or ufw to block all web traffic coming to your server unless it’s from Cloudflare. Personally I think for a production environment this is the safest and most bulletproof approach: if you expect all your web traffic to come through Cloudflare, there’s no benefit to having a publicly reachable web server. From a security standpoint this is also the most effective. Combined with the technique above (Authenticated Origin Pulls), you can add even more security if you’re paranoid like me.
The issue with this is that if you wish to have a host that doesn’t go through Cloudflare, it will no longer be reachable (unless you whitelist visitors’ IPs).
I have a handy script located here which will grab Cloudflare’s IP range list and output a file for use with the Real IPs nginx module. This script can also optionally add UFW rules allowing Cloudflare IPs for HTTPS and HTTP. If your UFW is set to block all incoming, then this will only allow incoming requests on ports 80 and 443 from Cloudflare IPs. I’m not going to write up how to set up UFW; there’s a nice DigitalOcean tutorial here. Just a word of caution: make sure you’ve disabled any existing HTTP/HTTPS rules before relying on this script. The script’s README tells you how to run it as a cron job so you stay on top of any updates to the Cloudflare IP range.
Make sure you run the script as root and change the UFW_RULES=false line to UFW_RULES=true.
I’m going to break the discussion into two sections:
I have a handy script located here which will grab Cloudflare’s IP range list and output a file for use with the Real IPs nginx module. This is also the same script used in the traffic blocking section of the security section below, so if you wish to do both read on!
To install it, the README does a good job of explaining, but essentially if you run the script it will start working like magic! You will need to automate the script (crontab recommended) to keep track of the Cloudflare IP range. A guide on how to do this is in the README for the script.
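For reference, the file it generates for the Real IPs module looks along these lines - set_real_ip_from for each published Cloudflare range, plus real_ip_header so nginx reads the visitor’s address from the CF-Connecting-IP header Cloudflare sets (the ranges below are just examples; the script fills in the current list):
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 2400:cb00::/32;
# ...one line per Cloudflare range...
real_ip_header CF-Connecting-IP;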
Not related to nginx specifically (hence why it’s down here), but these are settings which are good to have on for your website in Cloudflare. Go to SSL/TLS->Edge Certificates to enable them:
Feel free to turn HSTS on for subdomains; I have it off since I do testing stuff on some of my subdomains, but it’s good practice to have it on!
You should disable server tokens in your nginx configuration, which stops your nginx version being published on your website (typically shown on error pages). To do so, go to your nginx.conf file, which should be located at /etc/nginx/nginx.conf, and add this line in the http block.
sudo nano /etc/nginx/nginx.conf
http {
server_tokens off;
...
...
}
After testing your config (sudo nginx -t) and restarting nginx (sudo systemctl restart nginx), any “bad” page, 404 error, etc, should now look like this:
This one goes without saying, and while this guide doesn’t aim to be a reverse proxy how-to, it’s worth mentioning. If you reverse proxy a service through nginx, you most definitely should block direct traffic to it (the port you’re proxying through nginx) using a firewall such as UFW. I’m not going to write a guide on how to do all this, but check out the DigitalOcean tutorial here on how to get started with UFW. Just make sure your web server is still accessible when you enable UFW - the article explains how to do this.
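As a quick sketch (assuming a hypothetical backend app listening on port 3000): either bind the app to 127.0.0.1 so it isn’t reachable from outside at all, or deny the port with UFW:
# Block external access to the backend port; nginx can still reach it locally
sudo ufw deny 3000/tcp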
This is my first blog post so I don’t know how well this will be received, but hopefully this helps someone setting up nginx and wanting to be security conscious.