Save newsletter signups

You'll learn
  • Adding a Postgres database and connecting to it.
  • Saving newsletter signups to the database.
  • Migrating the database.
  • Testing database functions.
  • Adding the database connection to your app health check.

I hope you had that snack, because we're going to be adding a lot of functionality in this section. 😎

Having a newsletter signup box on the front page of your app is much more fun if you can actually save the email addresses coming in, so of course that's what we're going to do now. We'll be using a Postgres database through Docker to save our state, in development, in tests, and in production. Why Postgres? Because it's popular, good, relational, and available everywhere, including in the cloud and on AWS Lightsail.

We'll be handling so-called migrations, which is a common term for the changes to your database tables and data as the application evolves. It's important to do this properly, so you can be sure your valuable data is always safe and your app stays up while you're changing things in the background.

Last but not least, we are going to add a database connection check to the health handler (which currently does nothing), so that we only route web traffic to an app container if it can still connect to the database.

As always, type in or copy the code yourself, or get it with:

$ git fetch && git checkout --track golangdk/newsletter-db

See the diff on GitHub.

Let's add that database!

The storage package

We'll now add a new package storage, which has permanent storage-related code in our app. This goes into storage/database.go:

storage/database.go
package storage

import (
	"context"
	"fmt"
	"time"

	_ "github.com/jackc/pgx/v4/stdlib"
	"github.com/jmoiron/sqlx"
	"go.uber.org/zap"
)

// Database is the relational storage abstraction.
type Database struct {
	DB                    *sqlx.DB
	host                  string
	port                  int
	user                  string
	password              string
	name                  string
	maxOpenConnections    int
	maxIdleConnections    int
	connectionMaxLifetime time.Duration
	connectionMaxIdleTime time.Duration
	log                   *zap.Logger
}

// NewDatabaseOptions for NewDatabase.
type NewDatabaseOptions struct {
	Host                  string
	Port                  int
	User                  string
	Password              string
	Name                  string
	MaxOpenConnections    int
	MaxIdleConnections    int
	ConnectionMaxLifetime time.Duration
	ConnectionMaxIdleTime time.Duration
	Log                   *zap.Logger
}

// NewDatabase with the given options.
// If no logger is provided, logs are discarded.
func NewDatabase(opts NewDatabaseOptions) *Database {
	if opts.Log == nil {
		opts.Log = zap.NewNop()
	}

	return &Database{
		host:                  opts.Host,
		port:                  opts.Port,
		user:                  opts.User,
		password:              opts.Password,
		name:                  opts.Name,
		maxOpenConnections:    opts.MaxOpenConnections,
		maxIdleConnections:    opts.MaxIdleConnections,
		connectionMaxLifetime: opts.ConnectionMaxLifetime,
		connectionMaxIdleTime: opts.ConnectionMaxIdleTime,
		log:                   opts.Log,
	}
}

// Connect to the database.
func (d *Database) Connect() error {
	d.log.Info("Connecting to database", zap.String("url", d.createDataSourceName(false)))

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	var err error
	d.DB, err = sqlx.ConnectContext(ctx, "pgx", d.createDataSourceName(true))
	if err != nil {
		return err
	}

	d.log.Debug("Setting connection pool options",
		zap.Int("max open connections", d.maxOpenConnections),
		zap.Int("max idle connections", d.maxIdleConnections),
		zap.Duration("connection max lifetime", d.connectionMaxLifetime),
		zap.Duration("connection max idle time", d.connectionMaxIdleTime))
	d.DB.SetMaxOpenConns(d.maxOpenConnections)
	d.DB.SetMaxIdleConns(d.maxIdleConnections)
	d.DB.SetConnMaxLifetime(d.connectionMaxLifetime)
	d.DB.SetConnMaxIdleTime(d.connectionMaxIdleTime)

	return nil
}

func (d *Database) createDataSourceName(withPassword bool) string {
	password := d.password
	if !withPassword {
		password = "xxx"
	}
	return fmt.Sprintf("postgresql://%v:%v@%v:%v/%v?sslmode=disable",
		d.user, password, d.host, d.port, d.name)
}

// Ping the database.
func (d *Database) Ping(ctx context.Context) error {
	ctx, cancel := context.WithTimeout(ctx, time.Second)
	defer cancel()

	if err := d.DB.PingContext(ctx); err != nil {
		return err
	}
	_, err := d.DB.ExecContext(ctx, `select 1`)
	return err
}

It looks like a lot of code, but if you look closely, it doesn't actually do much. The Database struct holds the database connection pool through a neat little utility library called sqlx, which makes querying the database a little nicer. The rest of the fields are just for connection options; we'll get to those below.
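As a quick taste of what sqlx adds on top of the standard database/sql, here's a hypothetical snippet (the function is made up, not code our app needs): SelectContext scans a whole result set directly into a slice, instead of the usual loop over rows.

// getEmails is a hypothetical example of sqlx convenience:
// SelectContext runs the query and scans all rows into the slice in one call.
func getEmails(ctx context.Context, db *sqlx.DB) ([]string, error) {
	var emails []string
	err := db.SelectContext(ctx, &emails, `select email from newsletter_subscribers`)
	return emails, err
}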

NewDatabase is structured much in the same way as server.New from earlier, just creating a default no-op logger if one isn't supplied in the NewDatabaseOptions.

The Connect method opens a connection to the database with a timeout of 10 seconds, and then sets a few connection options. The options are:

  • SetMaxOpenConns, which sets the maximum number of open connections to the database. When using a database in Go, the database connection is really a pool of connections, ready to use for your app queries. This sets a cap on how many connections can be in that pool.
  • SetMaxIdleConns, which sets the maximum number of connections in the idle connection pool. If a connection isn't being used for a query anymore, it's idle, and this number caps the number of idle connections.
  • SetConnMaxLifetime, which sets the maximum amount of time a connection may be reused. Connections in the pool are automatically replaced if they no longer work for one reason or another, but this sets a maximum lifetime even if they still work.
  • SetConnMaxIdleTime, which sets the maximum amount of time a connection may be idle. It may be closed after this time.

The values of these will depend on the environment, and are especially relevant when we deploy these changes to the cloud. You will see in a later section how to figure out what to set them to, but now you know what they mean.

createDataSourceName exists to build the connection URL, with a parameter to hide the password for logging purposes.
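With the development settings we'll use below, the data source name comes out as postgresql://canvas:123@localhost:5432/canvas?sslmode=disable for connecting, and with xxx in place of the password when it's logged.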

Last, the Ping method checks that we can still connect to the database. This is the method we will be using in the health check later.

For this to work, you need these dependencies:

$ go get -u github.com/jmoiron/sqlx github.com/jackc/pgx/v4

So far, so good. But there's no database to connect to yet!

Getting Postgres with Docker

Docker has a nice utility built in called compose. Compose takes a configuration file as input, and makes sure the containers in it are up and running according to the configuration. If the configuration changes, so do the containers. Sweet.

Put this in a file called docker-compose.yaml:

docker-compose.yaml
version: '3.8'

services:
  postgres:
    image: postgres:12
    environment:
      POSTGRES_USER: canvas
      POSTGRES_PASSWORD: 123
    ports:
      - 5432:5432
    volumes:
      - postgres:/var/lib/postgresql/data

  postgres-test:
    image: postgres:12
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: 123
      POSTGRES_DB: template1
    ports:
      - 5433:5432

volumes:
  postgres:

Now, run this command and see two database containers started on your development machine:

$ docker compose up -d
[+] Running 3/3
 ⠿ Network canvas_default            Created
 ⠿ Container canvas_postgres_1       Started
 ⠿ Container canvas_postgres-test_1  Started

The first database is used for development locally, and the second for integration tests described later. The important thing to note is that you can connect to the dev database using the user canvas, the password 123, and the database name canvas (it defaults to the user name if not set explicitly).

To test that the connection works, you can use the psql command, either directly if you have it installed, or through Docker:

$ docker run --rm -e PGPASSWORD=123 postgres:12 psql -h host.docker.internal -U canvas -A -c "select 'excellent' as partytime"
partytime
excellent
(1 row)

If it doesn't work, check the Docker logs for the database container to see if anything's wrong:

$ docker logs canvas_postgres_1

Saving newsletter signups

Now it's finally time to write the methods to save the newsletter signups. Remember that signupper interface in the previous section that we used as a dependency for the HTTP handler, in handlers/newsletter.go?

handlers/newsletter.go
type signupper interface {
	SignupForNewsletter(ctx context.Context, email model.Email) (string, error)
}

That's the interface we're implementing now. Put this in storage/newsletter.go:

storage/newsletter.go
package storage

import (
	"context"
	"crypto/rand"
	"fmt"

	"canvas/model"
)

// SignupForNewsletter with the given email. Returns a token used for confirming the email address.
func (d *Database) SignupForNewsletter(ctx context.Context, email model.Email) (string, error) {
	token, err := createSecret()
	if err != nil {
		return "", err
	}

	query := `
		insert into newsletter_subscribers (email, token)
		values ($1, $2)
		on conflict (email) do update set
			token = excluded.token,
			updated = now()`

	_, err = d.DB.ExecContext(ctx, query, email, token)
	return token, err
}

func createSecret() (string, error) {
	secret := make([]byte, 32)
	if _, err := rand.Read(secret); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", secret), nil
}

Here we create a random string token using crypto/rand (not math/rand, because numbers from that are guessable under certain circumstances) and insert the token and email address into a table called newsletter_subscribers that we haven't created yet. The 32 random bytes hex-encode to two characters each, giving a 64-character token string. If the email already exists, we just update the token.

For this to work, we of course need to create the database table newsletter_subscribers. This is where the database migrations come in.

Tables and migrations

As already mentioned in the intro, migrations are a name for a standardized way to handle changes to your database. The whole idea is that you not only write the queries to change your database from the current state to the state you want, but also the reverse: the query to change it back from the state you (ideally) want, to the current state.

That way, if there is an unexpected error when upgrading your database tables, you already have the tool ready to roll back your changes, which makes everything much safer. The thing about unexpected errors is that you never expect them, and they will eventually happen to you, so it's better to be prepared than to panic during downtime. 😅

Also, these migrations will be run by a separate tool that's not part of the app. To explain why, I'll describe a common approach, and why that's a bad idea in a cloud environment.

Imagine that you have a single server with Postgres and your app running on it. You might have migrations in place, as you know it's a good idea. But to make things easy your app runs any new migrations on every app startup, so you're sure you can deploy code changes that depend on database changes.

What's wrong with this? Perhaps take a minute to think about this for a cloud setting, then read on.

Ready?

A couple of things are wrong here:

  • In a cloud, you often have multiple app containers. Should each of these run migrations at startup? Or just one of them? They would have to coordinate that somehow, or handle failures gracefully. No matter what, complexity on top of something that's already complex. Not a good idea.
  • If your migration fails, you can roll it back manually. But maybe your app then crashes and re-applies the migration, and now your migration fails again. Repeat.

That's why we run migrations separately from the app.

And there's more. It's always a good idea to handle database changes in exactly this order:

  1. Update your code to work with both the current and the new database state, then deploy it and see that it works.
  2. Write and run the database migration.
  3. Check that your database state is how you want it to be and your app works as expected.
  4. After some time, remove the code that works with the old database state, deploy and check that everything still works.

This is a safe way to make database changes. It may seem like a lot of work, but the data in your database is often the most valuable thing in your app. Companies have gone bankrupt losing their databases. Don't gamble with your data.
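As a concrete (hypothetical) example: to rename a column this way, you would first deploy code that can read from both the old and the new column name, then run a migration adding the new column and copying the data over, verify everything works, and only later drop the old column in a separate migration.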

Whew, with that out of the way, how do we make and run migrations? Create these two files, first storage/migrations/1624619937-newsletter-subscribers.up.sql:

storage/migrations/1624619937-newsletter-subscribers.up.sql
create table newsletter_subscribers (
	email text primary key,
	token text not null,
	confirmed bool not null default false,
	active bool not null default true,
	created timestamp not null default now(),
	updated timestamp not null default now()
);

And storage/migrations/1624619937-newsletter-subscribers.down.sql (notice the down instead of up):

storage/migrations/1624619937-newsletter-subscribers.down.sql
drop table newsletter_subscribers;

It's conventional to have a number in the filename (here the unix timestamp) that is higher for newer migrations. That way, the migration tool automatically knows the ordering of the migrations. The part of the filename with the timestamp and the name after it is called the version, and it needs to be identical for both files. Here the version is 1624619937-newsletter-subscribers.
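For example, if we later added another migration on top of this one, it could be a pair of files like these (hypothetical names, following the same convention with a newer timestamp):

storage/migrations/1630000000-add-name-column.up.sql
storage/migrations/1630000000-add-name-column.down.sql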

We're going to be using a small library for migrations that I've written, which is simply called migrate. Get it with:

$ go get -u github.com/maragudk/migrate

migrate can safely run each migration inside a single transaction, so if it fails, the changes will automatically be rolled back.

Then we just need to write a small tool to apply our migrations. Put this in cmd/migrate/main.go:

cmd/migrate/main.go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/maragudk/env"
	"github.com/maragudk/migrate"
	"go.uber.org/zap"

	"canvas/storage"
)

func main() {
	os.Exit(start())
}

func start() int {
	_ = env.Load()

	logEnv := env.GetStringOrDefault("LOG_ENV", "development")
	log, err := createLogger(logEnv)
	if err != nil {
		fmt.Println("Error setting up the logger:", err)
		return 1
	}

	if len(os.Args) < 2 {
		log.Warn("Usage: migrate up|down|to")
		return 1
	}

	if os.Args[1] == "to" && len(os.Args) < 3 {
		log.Info("Usage: migrate to <version>")
		return 1
	}

	db := storage.NewDatabase(storage.NewDatabaseOptions{
		Host:     env.GetStringOrDefault("DB_HOST", "localhost"),
		Port:     env.GetIntOrDefault("DB_PORT", 5432),
		User:     env.GetStringOrDefault("DB_USER", ""),
		Password: env.GetStringOrDefault("DB_PASSWORD", ""),
		Name:     env.GetStringOrDefault("DB_NAME", ""),
	})
	if err := db.Connect(); err != nil {
		log.Error("Error connecting to database", zap.Error(err))
		return 1
	}

	fsys := os.DirFS("storage/migrations")
	switch os.Args[1] {
	case "up":
		err = migrate.Up(context.Background(), db.DB.DB, fsys)
	case "down":
		err = migrate.Down(context.Background(), db.DB.DB, fsys)
	case "to":
		err = migrate.To(context.Background(), db.DB.DB, fsys, os.Args[2])
	default:
		log.Error("Unknown command", zap.String("name", os.Args[1]))
		return 1
	}
	if err != nil {
		log.Error("Error migrating", zap.Error(err))
		return 1
	}

	return 0
}

func createLogger(env string) (*zap.Logger, error) {
	switch env {
	case "production":
		return zap.NewProduction()
	case "development":
		return zap.NewDevelopment()
	default:
		return zap.NewNop(), nil
	}
}

Then put this in a new file in the root of your project called .env, which is used by the new library github.com/maragudk/env to automatically load environment variables:

.env
DB_USER=canvas
DB_PASSWORD=123
DB_NAME=canvas

This .env is typically not checked into version control, because it may later contain real passwords for services you depend on in development. Add it to your .gitignore file.
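If you're unsure what that looks like, the entry is just the filename on its own line, appended to whatever your .gitignore already contains:

.gitignore
.env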

The migration tool is basically a script that connects to the database and passes it, along with the migrations directory, to migrate. I won't go into more detail on it. You should now be able to run migrations all the way up while your database is running:

$ go run ./cmd/migrate up

To run migrations down and essentially delete all your tables, run:

$ go run ./cmd/migrate down

To migrate to a specific version:

$ go run ./cmd/migrate to 1624619937-newsletter-subscribers

About that table

With all those migrations out of the way, we can get back to the task at hand: newsletter signups. I never explained that database table, so let's see it again:

storage/migrations/1624619937-newsletter-subscribers.up.sql
create table newsletter_subscribers (
	email text primary key,
	token text not null,
	confirmed bool not null default false,
	active bool not null default true,
	created timestamp not null default now(),
	updated timestamp not null default now()
);

You can see the email and token that we inserted in the database function. The email is the primary key, so it can't have a duplicate. What are the rest of the fields doing?

We're planning ahead a bit and adding a confirmed field, for when an email address has been confirmed by the recipient (which is why it defaults to false), an active field, for whether a subscriber should still receive emails or not, and then some handy timestamps. That's it.
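To see how we might use these fields later, here's a hypothetical sketch (the method and query are assumptions, not part of this section's code, and it assumes model.Email is a string type, as the signupper interface suggests) of fetching the addresses that should actually receive emails:

// GetActiveSubscribers is a hypothetical method for a later section:
// it fetches the addresses that are confirmed by their owners and still active.
func (d *Database) GetActiveSubscribers(ctx context.Context) ([]model.Email, error) {
	var emails []model.Email
	err := d.DB.SelectContext(ctx, &emails,
		`select email from newsletter_subscribers where confirmed and active`)
	return emails, err
}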


Does it work?

Before hooking up the database to the server, let's write some tests to check that everything works as expected.

First, create a file integrationtest/database.go, which will have our database integration test helper function, just like the one for our server:

integrationtest/database.go
package integrationtest

import (
	"context"
	"os"
	"sync"
	"time"

	"github.com/jmoiron/sqlx"
	"github.com/maragudk/env"
	"github.com/maragudk/migrate"

	"canvas/storage"
)

var once sync.Once

// CreateDatabase for testing.
// Usage:
// 	db, cleanup := CreateDatabase()
// 	defer cleanup()
// 	…
func CreateDatabase() (*storage.Database, func()) {
	env.MustLoad("../.env-test")

	once.Do(initDatabase)

	db, cleanup := connect("postgres")
	defer cleanup()

	dropConnections(db.DB, "template1")

	name := env.GetStringOrDefault("DB_NAME", "test")
	dropConnections(db.DB, name)

	db.DB.MustExec(`drop database if exists ` + name)
	db.DB.MustExec(`create database ` + name)

	return connect(name)
}

func initDatabase() {
	db, cleanup := connect("template1")
	defer cleanup()

	for db.Ping(context.Background()) != nil {
		time.Sleep(100 * time.Millisecond)
	}

	if err := migrate.Up(context.Background(), db.DB.DB, os.DirFS("../storage/migrations")); err != nil {
		panic(err)
	}

	if err := migrate.Down(context.Background(), db.DB.DB, os.DirFS("../storage/migrations")); err != nil {
		panic(err)
	}

	if err := migrate.Up(context.Background(), db.DB.DB, os.DirFS("../storage/migrations")); err != nil {
		panic(err)
	}

	if err := db.DB.Close(); err != nil {
		panic(err)
	}
}

func connect(name string) (*storage.Database, func()) {
	db := storage.NewDatabase(storage.NewDatabaseOptions{
		Host:               env.GetStringOrDefault("DB_HOST", "localhost"),
		Port:               env.GetIntOrDefault("DB_PORT", 5432),
		User:               env.GetStringOrDefault("DB_USER", "test"),
		Password:           env.GetStringOrDefault("DB_PASSWORD", ""),
		Name:               name,
		MaxOpenConnections: 10,
		MaxIdleConnections: 10,
	})
	if err := db.Connect(); err != nil {
		panic(err)
	}

	return db, func() {
		if err := db.DB.Close(); err != nil {
			panic(err)
		}
	}
}

func dropConnections(db *sqlx.DB, name string) {
	db.MustExec(`
		select pg_terminate_backend(pg_stat_activity.pid)
		from pg_stat_activity
		where pg_stat_activity.datname = $1 and pid <> pg_backend_pid()`, name)
}

It uses a little trick in Postgres to make integration tests really fast and safe: All migrations are run on a special database called template1 just on the first test run. That database is the one that all new databases copy their state from, so when dropping and recreating our test database, the migrations will already be there. But this happens much faster than running migrations each time. Neat!

As an added bonus, we also run all migrations first up, then down, then up again, so our down migrations are also tested. Double-neat.


For this to work, add another environment file .env-test (and feel free to add this one to your version control):

.env-test
DB_PASSWORD=123
DB_PORT=5433
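Note that the port matches the postgres-test container from docker-compose.yaml, and the user defaults to test in the connect function above, so only the password and port need setting here.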

Finally, we can add some actual tests. Into storage/newsletter_test.go, of course:

storage/newsletter_test.go
package storage_test

import (
	"context"
	"testing"

	"github.com/matryer/is"

	"canvas/integrationtest"
)

func TestDatabase_SignupForNewsletter(t *testing.T) {
	integrationtest.SkipIfShort(t)

	t.Run("signs up", func(t *testing.T) {
		is := is.New(t)

		db, cleanup := integrationtest.CreateDatabase()
		defer cleanup()

		expectedToken, err := db.SignupForNewsletter(context.Background(), "me@example.com")
		is.NoErr(err)
		is.Equal(64, len(expectedToken))

		var email, token string
		err = db.DB.QueryRow(`select email, token from newsletter_subscribers`).Scan(&email, &token)
		is.NoErr(err)
		is.Equal("me@example.com", email)
		is.Equal(expectedToken, token)

		expectedToken2, err := db.SignupForNewsletter(context.Background(), "me@example.com")
		is.NoErr(err)
		is.True(expectedToken != expectedToken2)

		err = db.DB.QueryRow(`select email, token from newsletter_subscribers`).Scan(&email, &token)
		is.NoErr(err)
		is.Equal("me@example.com", email)
		is.Equal(expectedToken2, token)
	})
}

We've just tested that the email address and token get inserted into the table, and that signing up twice with the same address just replaces the token, all without errors. Make sure your tests all pass with:

$ make test-integration

Hooking up the database to the server

We are now ready to hook everything up to the server and make it run in our app.

In server/server.go, we add the database to the server options and the Server struct, and call Database.Connect at the beginning of Server.Start:

server/server.go
// Package server contains everything for setting up and running the HTTP server.
package server

import (
	"context"
	"errors"
	"fmt"
	"net"
	"net/http"
	"strconv"
	"time"

	"github.com/go-chi/chi/v5"
	"go.uber.org/zap"

	"canvas/storage"
)

type Server struct {
	address  string
	database *storage.Database
	log      *zap.Logger
	mux      chi.Router
	server   *http.Server
}

type Options struct {
	Database *storage.Database
	Host     string
	Log      *zap.Logger
	Port     int
}

func New(opts Options) *Server {
	if opts.Log == nil {
		opts.Log = zap.NewNop()
	}

	address := net.JoinHostPort(opts.Host, strconv.Itoa(opts.Port))
	mux := chi.NewMux()

	return &Server{
		address:  address,
		database: opts.Database,
		log:      opts.Log,
		mux:      mux,
		server: &http.Server{
			Addr:              address,
			Handler:           mux,
			ReadTimeout:       5 * time.Second,
			ReadHeaderTimeout: 5 * time.Second,
			WriteTimeout:      5 * time.Second,
			IdleTimeout:       5 * time.Second,
		},
	}
}

// Start the Server by setting up routes and listening for HTTP requests on the given address.
func (s *Server) Start() error {
	if err := s.database.Connect(); err != nil {
		return fmt.Errorf("error connecting to database: %w", err)
	}

	s.setupRoutes()

	s.log.Info("Starting", zap.String("address", s.address))
	if err := s.server.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
		return fmt.Errorf("error starting server: %w", err)
	}
	return nil
}

// …

Then in cmd/server/main.go, we can create the Database using storage.NewDatabase. Note that we're also using the .env file here now:

cmd/server/main.go
// Package main is the entry point to the server. It reads configuration, sets up logging and error handling,
// handles signals from the OS, and starts and stops the server.
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/maragudk/env"
	"go.uber.org/zap"
	"golang.org/x/sync/errgroup"

	"canvas/server"
	"canvas/storage"
)

// release is set through the linker at build time, generally from a git sha.
// Used for logging and error reporting.
var release string

func main() {
	os.Exit(start())
}

func start() int {
	_ = env.Load()

	logEnv := env.GetStringOrDefault("LOG_ENV", "development")
	log, err := createLogger(logEnv)
	if err != nil {
		fmt.Println("Error setting up the logger:", err)
		return 1
	}

	log = log.With(zap.String("release", release))

	defer func() {
		// If we cannot sync, there's probably something wrong with outputting logs,
		// so we probably cannot write using fmt.Println either. So just ignore the error.
		_ = log.Sync()
	}()

	host := env.GetStringOrDefault("HOST", "localhost")
	port := env.GetIntOrDefault("PORT", 8080)

	s := server.New(server.Options{
		Database: createDatabase(log),
		Host:     host,
		Log:      log,
		Port:     port,
	})

	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	eg, ctx := errgroup.WithContext(ctx)

	eg.Go(func() error {
		if err := s.Start(); err != nil {
			log.Info("Error starting server", zap.Error(err))
			return err
		}
		return nil
	})

	<-ctx.Done()

	eg.Go(func() error {
		if err := s.Stop(); err != nil {
			log.Info("Error stopping server", zap.Error(err))
			return err
		}
		return nil
	})

	if err := eg.Wait(); err != nil {
		return 1
	}

	return 0
}

func createLogger(env string) (*zap.Logger, error) {
	switch env {
	case "production":
		return zap.NewProduction()
	case "development":
		return zap.NewDevelopment()
	default:
		return zap.NewNop(), nil
	}
}

func createDatabase(log *zap.Logger) *storage.Database {
	return storage.NewDatabase(storage.NewDatabaseOptions{
		Host:                  env.GetStringOrDefault("DB_HOST", "localhost"),
		Port:                  env.GetIntOrDefault("DB_PORT", 5432),
		User:                  env.GetStringOrDefault("DB_USER", ""),
		Password:              env.GetStringOrDefault("DB_PASSWORD", ""),
		Name:                  env.GetStringOrDefault("DB_NAME", ""),
		MaxOpenConnections:    env.GetIntOrDefault("DB_MAX_OPEN_CONNECTIONS", 10),
		MaxIdleConnections:    env.GetIntOrDefault("DB_MAX_IDLE_CONNECTIONS", 10),
		ConnectionMaxLifetime: env.GetDurationOrDefault("DB_CONNECTION_MAX_LIFETIME", time.Hour),
		Log:                   log,
	})
}

Now we can get rid of the temporary mock in our routes and provide the database instead:

server/routes.go
package server

import (
	"canvas/handlers"
)

func (s *Server) setupRoutes() {
	handlers.Health(s.mux, s.database)
	handlers.FrontPage(s.mux)
	handlers.NewsletterSignup(s.mux, s.database)
	handlers.NewsletterThanks(s.mux)
}

And last, but definitely not least, add the database to the server integration test helper in integrationtest/server.go:

integrationtest/server.go
package integrationtest

import (
	"net/http"
	"testing"
	"time"

	"canvas/server"
)

// CreateServer for testing on port 8081, returning a cleanup function that stops the server.
// Usage:
// 	cleanup := CreateServer()
// 	defer cleanup()
func CreateServer() func() {
	db, cleanupDB := CreateDatabase()

	s := server.New(server.Options{
		Host:     "localhost",
		Port:     8081,
		Database: db,
	})

	go func() {
		if err := s.Start(); err != nil {
			panic(err)
		}
	}()

	for {
		_, err := http.Get("http://localhost:8081/")
		if err == nil {
			break
		}
		time.Sleep(5 * time.Millisecond)
	}

	return func() {
		if err := s.Stop(); err != nil {
			panic(err)
		}
		cleanupDB()
	}
}

// …

Note that the database cleanup is itself called in the server cleanup function, after the server is stopped.

The health check

We're almost there! The last thing we need is the database ping in the health check. As mentioned at the beginning of this section, we do this so that the load balancer routes traffic away from an app container if it cannot connect to the database anymore, hoping that another app container can. This also means that your app is down if your database is down.

This is a choice you have to make while designing the app. You could handle the database being down everywhere you use it, and degrade gracefully if possible (for example turning certain features off, but otherwise still serving traffic). Here I've chosen the simpler route of essentially turning the app off if the database is down.

We're using the now familiar pattern of creating a small interface, pinger, that declares just the method the handler is interested in, namely Ping, in handlers/health.go:

handlers/health.go
package handlers

import (
	"context"
	"net/http"

	"github.com/go-chi/chi/v5"
)

type pinger interface {
	Ping(ctx context.Context) error
}

func Health(mux chi.Router, p pinger) {
	mux.Get("/health", func(w http.ResponseWriter, r *http.Request) {
		if err := p.Ping(r.Context()); err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
	})
}

Why HTTP 502 Bad Gateway? That's a status code often used for signalling that an underlying resource is at fault, not the app itself.

Also adjust the tests to check for this happening, in handlers/health_test.go:

handlers/health_test.go
package handlers_test

import (
	"context"
	"errors"
	"io"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/go-chi/chi/v5"
	"github.com/matryer/is"

	"canvas/handlers"
)

type pingerMock struct {
	err error
}

func (p *pingerMock) Ping(ctx context.Context) error {
	return p.err
}

func TestHealth(t *testing.T) {
	t.Run("returns 200", func(t *testing.T) {
		is := is.New(t)

		mux := chi.NewMux()
		handlers.Health(mux, &pingerMock{})

		code, _, _ := makeGetRequest(mux, "/health")
		is.Equal(http.StatusOK, code)
	})

	t.Run("returns 502 if the database cannot be pinged", func(t *testing.T) {
		is := is.New(t)

		mux := chi.NewMux()
		handlers.Health(mux, &pingerMock{err: errors.New("oh no")})

		code, _, _ := makeGetRequest(mux, "/health")
		is.Equal(http.StatusBadGateway, code)
	})
}

// …

Try it out!

Finally! We've arrived at a place where you can start the server and try out the signup form for yourself. Please do, and feel free to clap with joy afterwards if everything works as expected. If not, run your tests, figure out what's wrong, and try again.
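You can also poke the new health check directly (assuming the server is running locally on port 8080): stop the database container and watch the status code change.

$ curl -i localhost:8080/health
$ docker compose stop postgres
$ curl -i localhost:8080/health
$ docker compose start postgres

The first request should return HTTP 200, the second 502 Bad Gateway.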

That was a lot of knowledge for a single section. It's an important one, because we'll be using the same approach again and again for all new state our app needs to store. Luckily, all the setup code is now done.

What's next? Sending people who sign up an email to confirm their email address, and afterwards sending them a nice welcome email. 🤗📧

Questions?

Get help at support@golang.dk.