
Deploying the newsletter feature

You'll learn
  • Creating a Postgres database in AWS Lightsail and migrating it with our database schema.
  • Creating an AWS SQS queue using a CloudFormation template.
  • Connecting the app to our new database and queue.

Since we last deployed the app, a few important things have happened that are relevant to our cloud setup:

  • We've added a database to save our newsletter subscribers.
  • We're using AWS SQS as a job queue.

To be able to deploy our app again, we need to set up these resources, and that's what I'll show you in this section.

We'll be doing that with a combination of command-line commands and, where possible, AWS CloudFormation. Because Lightsail doesn't provide an SQS queue, we'll need to go into the main AWS offering to get one. The easiest way to set one up is with a CloudFormation template, which is just a configuration file that describes the resources we need in the cloud.

But first, another disclaimer:

Important: The cloud still costs real money

I want to mention again that the cloud costs money. The smallest Postgres database in Lightsail costs $15/month, but you can be billed more if, for example, you use a lot of traffic to and from the database. You probably won't by following this guide, but keep an eye on your costs.

Luckily, SQS only costs money when you actively use it to send messages, so the cost will be near zero for our use case.

I'll show you how to tear everything down again, so you don't have to keep on paying for it after you've finished this course.

The code:

$ git fetch && git checkout --track golangdk/newsletter-deploy

See the diff on GitHub.

Setting up the database

We'll be using the AWS CLI again to set up the database, because it's very easy that way. If you'd rather do it through the web interface, feel free to do that instead.

Setting it up is one (long) command. Before you run it, make sure that the --availability-zone argument is set to one in the region where your app is running. You can get a list using:

$ aws lightsail get-regions --include-availability-zones

So if your app is running in eu-central-1, you can choose eu-central-1a for the availability zone.
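If the output is long, you can narrow it down to your region with jq, for example like this (assuming eu-central-1; the regions are listed under a regions key in the output):

$ aws lightsail get-regions --include-availability-zones | \
jq '.regions[] | select(.name == "eu-central-1")'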

Also make sure to set the database password to something other than 123, and note it for later. The command to set up the database is then (using the availability zone eu-central-1a and the password 123):

$ aws lightsail create-relational-database \
--relational-database-name canvasdb \
--availability-zone eu-central-1a \
--relational-database-blueprint-id postgres_12 \
--relational-database-bundle-id micro_1_0 \
--master-database-name canvas \
--master-username canvas \
--master-user-password 123

It'll take the Lightsail system a while to set up the database, so let's continue with preparations to migrate the database.
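You can check on it at any point with the command below; the state field should eventually report something like available, which means the database is ready:

$ aws lightsail get-relational-database \
--relational-database-name canvasdb | \
jq '.relationalDatabase.state'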

Migrations

Remember the talk about migrations in production when we introduced the database earlier? We want to be able to control when we run migrations, separately from deploys. Ideally, this would be totally separate from the app, perhaps running as a task that migrates and then stops, in the same computing environment as the app.

Unfortunately, Lightsail doesn't make it easy to run a one-off task that stops when it's done. So to keep things simple for this course, we'll instead add two small handlers to our app that we can call to migrate the database. We'll hide these handlers behind basic authentication, so not just anyone can call them.

handlers/migrate.go
package handlers

import (
	"context"
	"net/http"

	"github.com/go-chi/chi/v5"
)

type migrator interface {
	MigrateTo(ctx context.Context, version string) error
	MigrateUp(ctx context.Context) error
}

func MigrateTo(mux chi.Router, m migrator) {
	mux.Post("/migrate/to", func(w http.ResponseWriter, r *http.Request) {
		version := r.FormValue("version")
		if version == "" {
			http.Error(w, "version is empty", http.StatusBadRequest)
			return
		}
		if err := m.MigrateTo(r.Context(), version); err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
	})
}

func MigrateUp(mux chi.Router, m migrator) {
	mux.Post("/migrate/up", func(w http.ResponseWriter, r *http.Request) {
		if err := m.MigrateUp(r.Context()); err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
	})
}

The MigrateTo handler is used when migrating to a specific version. You can use it when you want to migrate one step at a time, or roll back to a previous version.

MigrateUp doesn't take a version, and simply migrates to the newest version.

The associated storage methods are:

storage/database.go
package storage

import (
	"context"
	"embed"
	"fmt"
	"io/fs"
	"time"

	_ "github.com/jackc/pgx/v4/stdlib"
	"github.com/jmoiron/sqlx"
	"github.com/maragudk/migrate"
	"go.uber.org/zap"
)

// …

//go:embed migrations
var migrations embed.FS

func (d *Database) MigrateTo(ctx context.Context, version string) error {
	fsys := d.getMigrations()
	return migrate.To(ctx, d.DB.DB, fsys, version)
}

func (d *Database) MigrateUp(ctx context.Context) error {
	fsys := d.getMigrations()
	return migrate.Up(ctx, d.DB.DB, fsys)
}

func (d *Database) getMigrations() fs.FS {
	fsys, err := fs.Sub(migrations, "migrations")
	if err != nil {
		panic(err)
	}
	return fsys
}

The storage methods use the embed package to include the migrations directly in the binary; the embedded files are checked at compile time. That's why we allow ourselves to panic on any error from fs.Sub when getting the subdirectory: unless we've spelled "migrations" wrong, it won't fail. Otherwise, the methods call the exact same migration functions as the migrate command we built earlier.
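Because the migration handlers only depend on the small migrator interface, you can exercise them in a test without a real database. Here's a minimal sketch of what that could look like, using a fake migrator (both the fake and the test are illustrative, not part of the course code):

package handlers_test

import (
	"context"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/go-chi/chi/v5"

	"canvas/handlers"
)

// fakeMigrator satisfies the migrator interface without touching a real database.
// It's purely illustrative and not part of the course code.
type fakeMigrator struct {
	migratedTo string
}

func (f *fakeMigrator) MigrateTo(_ context.Context, version string) error {
	f.migratedTo = version
	return nil
}

func (f *fakeMigrator) MigrateUp(_ context.Context) error {
	f.migratedTo = "up"
	return nil
}

func TestMigrateTo(t *testing.T) {
	mux := chi.NewMux()
	m := &fakeMigrator{}
	handlers.MigrateTo(mux, m)

	// Calling the handler without a version should give a 400 Bad Request.
	req := httptest.NewRequest(http.MethodPost, "/migrate/to", nil)
	rec := httptest.NewRecorder()
	mux.ServeHTTP(rec, req)
	if rec.Code != http.StatusBadRequest {
		t.Errorf("got status %v, want %v", rec.Code, http.StatusBadRequest)
	}

	// With a version in the query string (FormValue also reads query parameters),
	// the migrator should be called with that version.
	req = httptest.NewRequest(http.MethodPost, "/migrate/to?version=1", nil)
	rec = httptest.NewRecorder()
	mux.ServeHTTP(rec, req)
	if m.migratedTo != "1" {
		t.Errorf("got version %q, want %q", m.migratedTo, "1")
	}
}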

We'll add an admin password to our server to use in the authentication:

server/server.go
// Package server contains everything for setting up and running the HTTP server.

// …

type Server struct {
	address       string
	adminPassword string
	database      *storage.Database
	log           *zap.Logger
	mux           chi.Router
	queue         *messaging.Queue
	server        *http.Server
}

type Options struct {
	AdminPassword string
	Database      *storage.Database
	Host          string
	Log           *zap.Logger
	Port          int
	Queue         *messaging.Queue
}

func New(opts Options) *Server {
	if opts.Log == nil {
		opts.Log = zap.NewNop()
	}

	address := net.JoinHostPort(opts.Host, strconv.Itoa(opts.Port))
	mux := chi.NewMux()

	return &Server{
		address:       address,
		adminPassword: opts.AdminPassword,
		database:      opts.Database,
		log:           opts.Log,
		mux:           mux,
		queue:         opts.Queue,
		server: &http.Server{
			Addr:              address,
			Handler:           mux,
			ReadTimeout:       5 * time.Second,
			ReadHeaderTimeout: 5 * time.Second,
			WriteTimeout:      5 * time.Second,
			IdleTimeout:       5 * time.Second,
		},
	}
}

// …

Now we can add the new routes, using the basic auth middleware that comes with the chi router:

server/routes.go
package server

import (
	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"

	"canvas/handlers"
)

func (s *Server) setupRoutes() {
	handlers.Health(s.mux, s.database)

	handlers.FrontPage(s.mux)
	handlers.NewsletterSignup(s.mux, s.database, s.queue)
	handlers.NewsletterThanks(s.mux)
	handlers.NewsletterConfirm(s.mux, s.database, s.queue)
	handlers.NewsletterConfirmed(s.mux)

	s.mux.Group(func(r chi.Router) {
		r.Use(middleware.BasicAuth("canvas", map[string]string{"admin": s.adminPassword}))

		handlers.MigrateTo(r, s.database)
		handlers.MigrateUp(r, s.database)
	})
}

Don't forget to set a password when starting the app. Here, we get the password from the environment, but also set a default to be sure there's always one:

cmd/server/main.go
// Package main is the entry point to the server. It reads configuration, sets up logging and error handling,
// handles signals from the OS, and starts and stops the server.
package main

// …

func start() int {
	// …

	s := server.New(server.Options{
		AdminPassword: env.GetStringOrDefault("ADMIN_PASSWORD", "eyDawVH9LLZtaG2q"),
		Database:      createDatabase(log),
		Host:          host,
		Log:           log,
		Port:          port,
		Queue:         queue,
	})

	// …
}

// …
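In production, you'll want to override the default by setting ADMIN_PASSWORD, which we'll do in containers.json later in this section. If you need a random password, you can generate one with, for example:

$ openssl rand -base64 24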

Setting up the queue

We need to set up the SQS queue outside Lightsail. To be able to do this, your user needs administrator permissions, which we have to add through the web console:

  1. Go to console.aws.amazon.com/iamv2 and log in if you need to.
  2. Find your user in the dashboard and click the big "Add permissions" button.

    A screenshot of the IAM console with the add permissions button.
  3. Find the policy called AdministratorAccess and attach it to your user.

    A screenshot of the IAM console giving administrator access to your user.

After this, you have full administrator access to everything in AWS through the command line. Take care that you don't share your API keys anywhere! If you're not comfortable with this, make sure to remove the administrator access from your user again after this section.

Now we can run the commands we need to set up the queue. The first thing we need to do is give the resources in Lightsail access to the rest of AWS. Replace the --region parameter with your region and run:

$ aws lightsail peer-vpc --region eu-central-1
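If you want to confirm that the peering went through, Lightsail has a command that reports whether your VPC is peered:

$ aws lightsail is-vpc-peered --region eu-central-1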

Now we can set up the CloudFormation template to create the queue. The nice thing about this template is that you can extend it if you need to add more resources from AWS to your app in the future. Create a new file cloudformation.yaml at the root of your project:

cloudformation.yaml
Resources:
  AppUser:
    Type: AWS::IAM::User

  AppKeys:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref AppUser

  JobsQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: jobs
      VisibilityTimeout: 60
      ReceiveMessageWaitTimeSeconds: 20

  JobsQueuePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: JobsQueuePolicy
      Users:
        - !Ref AppUser
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Resource: !GetAtt JobsQueue.Arn
            Action:
              - sqs:GetQueueUrl
              - sqs:SendMessage
              - sqs:ReceiveMessage
              - sqs:DeleteMessage

Outputs:
  AccessKeyId:
    Value: !Ref AppKeys
  SecretAccessKey:
    Value: !GetAtt AppKeys.SecretAccessKey

You don't need to know the details of the template, so feel free to just copy and paste it without reading it. With it, we create a new user and API keys for that user, an SQS queue called jobs, and a policy that allows the new user to access the queue. The template outputs the API keys we need to give to our app.
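If you'd like to check the template for syntax errors before creating anything, you can optionally validate it first:

$ aws cloudformation validate-template \
--template-body file://cloudformation.yaml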

Now we can create the stack with:

$ aws cloudformation create-stack \
--stack-name canvas \
--capabilities CAPABILITY_NAMED_IAM \
--template-body file://cloudformation.yaml

It'll take a few minutes to create it. When it's done, run this to get your API keys, and note them for later:

$ aws cloudformation describe-stacks | jq '.Stacks[].Outputs'
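If your AWS account contains other CloudFormation stacks, you can limit the output to just this one:

$ aws cloudformation describe-stacks --stack-name canvas | jq '.Stacks[].Outputs'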

Deploying the app

Now that we have a database and a queue set up, we just need to pass the new configuration options to the app. Because the configuration string in our Makefile would get a bit long, we'll put the configuration in a separate file instead. In the Makefile, change the deploy target so that it uses a containers.json file for some of the configuration:

Makefile
.PHONY: build cover deploy start test test-integration

export image := `aws lightsail get-container-images --service-name canvas | jq -r '.containerImages[0].image'`

build:
	docker build -t canvas .

cover:
	go tool cover -html=cover.out

deploy:
	aws lightsail push-container-image --service-name canvas --label app --image canvas
	jq <containers.json ".app.image=\"$(image)\"" >containers2.json
	mv containers2.json containers.json
	aws lightsail create-container-service-deployment --service-name canvas \
		--containers file://containers.json \
		--public-endpoint '{"containerName":"app","containerPort":8080,"healthCheck":{"path":"/health"}}'

start:
	go run cmd/server/*.go

test:
	go test -coverprofile=cover.out -short ./...

test-integration:
	go test -coverprofile=cover.out -p 1 ./...

Then put this in the new containers.json file at the root of your project:

containers.json
{ "app": { "image": "", "environment": { "LOG_ENV": "production", "HOST": "", "PORT": "8080", "DB_USER": "canvas", "DB_PASSWORD": "{{your db password}}", "DB_HOST": "{{your db host}}", "DB_NAME": "canvas", "BASE_URL": "{{your base URL}}", "POSTMARK_TOKEN": "{{your postmark token}}", "MARKETING_EMAIL_ADDRESS": "{{your marketing email address}}", "TRANSACTIONAL_EMAIL_ADDRESS": "{{your transactional email address}}", "AWS_ACCESS_KEY_ID": "{{the aws access key ID from the cloudformation output}}", "AWS_SECRET_ACCESS_KEY": "{{the aws secret access key from the cloudformation output}}", "ADMIN_PASSWORD": "{{your admin password}}" }, "ports": { "8080": "HTTP" } } }

Now is the time to replace all the values marked with {{…}} with the secrets you've gathered so far. A few values are still missing, which you can get with these commands:

  • DB_HOST:

    $ aws lightsail get-relational-database \
    --relational-database-name canvasdb | \
    jq .relationalDatabase.masterEndpoint.address
  • BASE_URL:

    $ aws lightsail get-container-services \
    --service-name canvas | \
    jq '.containerServices[0].url'

Because containers.json has secrets in it, make sure to add it to your .gitignore file.

Finally, the time has come! Deploy your app with:

$ make build deploy

After it has finished deploying, the very last thing we need to do is run our migrations:

$ http -a admin:youradminpassword post 'https://yourbaseurl.amazonlightsail.com/migrate/up'

You should now be able to go to your app URL, sign up for a newsletter, get the confirmation email, click the link in it, and get a welcome email. Hooray! 🥳

If for some reason it doesn't work, check your logs with the following command and start debugging:

$ aws lightsail get-container-log --service-name canvas --container-name app

Cleaning up

As promised, here are the commands to shut everything down again, if you want to save some money and resources. To delete the database:

$ aws lightsail delete-relational-database \
--relational-database-name canvasdb

To delete the CloudFormation stack:

$ aws cloudformation delete-stack \
--stack-name canvas

And finally, to disable (but not delete) the app:

$ aws lightsail update-container-service \
--service-name canvas --is-disabled

Note that disabling the service will not free up its resources, and you will still be paying for it. If you don't want that, delete it entirely, but know that you'll have to set it up again later in the course.
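If you do want to delete the container service entirely, the command is:

$ aws lightsail delete-container-service \
--service-name canvas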

Feature complete

That's it, we've finished the very first useful feature in our app! Along the way, we've learned about databases and migrations, queues, job runners, sending emails, integration testing, and more.

Also, you now have a lot of very useful components that you can reuse across different projects. The database, along with its migrations and testing setup, is something every web app with state can use. The queue and job runner are useful for anything that shouldn't run as part of a request/response cycle. The emailer can be extended for every kind of email you need to send from an app.

So what's next? Now that we have our first useful feature, let's review what it means to have our app running with a database and a queue.

