Making Your Go Backend 10x Faster with Redis: REST API Caching and Secure JWT Token Rotation
A job listings API in Go with JWT authentication, job posting, and job enrollment — with Redis handling both response caching and refresh token storage.
Have you ever noticed some APIs respond almost instantly while others keep you waiting? The difference often comes down to one thing: caching. When your backend hits the database on every single request — even for data that rarely changes — you’re leaving a lot of performance on the table.
In this article, we’ll build a real-world Go REST API for a job listings platform with user authentication, job posting, and job enrollment features. Then we’ll layer in a Redis caching strategy that dramatically reduces database load and cuts response times on the most frequently accessed endpoints.
We’ll follow a hexagonal architecture (also known as ports and adapters), which keeps the caching layer completely decoupled from our business logic — meaning Redis can be swapped out or removed without touching a single service or handler.
By the end, you’ll have a production-ready pattern for cache-aside reads, targeted cache invalidation on writes, and refresh token storage in Redis — all wired together in a clean, maintainable Go codebase.
In case you’re unfamiliar with the tools we’ll be using: Gin is a lightweight and blazing-fast HTTP web framework for Go that makes it easy to build REST APIs without a lot of boilerplate. Redis is an in-memory key-value store that can function as a cache, message broker, or database — we’ll use it here for both caching API responses and storing refresh tokens for our auth system. GORM is Go’s most popular ORM, which we’ll use to interact with a PostgreSQL database.
The complete source code for everything covered in this article is available on GitHub: github.com/yehezkiel1086/go-redis-job-listings
Prerequisites
- Familiarity with Go, Gin, and basic REST API concepts
- Basic understanding of Redis and what caching means
- Docker installed (we’ll spin up PostgreSQL and Redis as containers)
- A working Go installation (1.21 or later)
Getting Started
Tooling
In addition to Go and Docker (covered above), install two development tools we'll lean on throughout:
- Task — a task runner we’ll use to simplify common commands
- Air — a live reload tool for Go development
Project Structure
Our project follows a hexagonal architecture, keeping infrastructure concerns (HTTP, database, cache) completely separated from core business logic. Here’s the full structure we’ll be working with:
|-- Taskfile.yml
|-- cmd
|   `-- http
|       `-- main.go
|-- docker-compose.yml
|-- go.mod
|-- go.sum
`-- internal
    |-- adapter
    |   |-- config
    |   |   `-- config.go
    |   |-- handler
    |   |   |-- auth.go
    |   |   |-- enroll.go
    |   |   |-- job.go
    |   |   |-- middleware.go
    |   |   |-- router.go
    |   |   `-- user.go
    |   `-- storage
    |       |-- postgres
    |       |   |-- db.go
    |       |   `-- repository
    |       |       |-- enroll.go
    |       |       |-- job.go
    |       |       `-- user.go
    |       `-- redis
    |           `-- redis.go
    `-- core
        |-- domain
        |   |-- enroll.go
        |   |-- error.go
        |   |-- job.go
        |   |-- jwt.go
        |   `-- user.go
        |-- port
        |   |-- auth.go
        |   |-- cache.go
        |   |-- enroll.go
        |   |-- job.go
        |   `-- user.go
        |-- service
        |   |-- auth.go
        |   |-- enroll.go
        |   |-- job.go
        |   `-- user.go
        `-- util
            |-- cache.go
            |-- helper.go
            |-- jwt.go
            `-- password.go
The adapter layer contains everything that touches the outside world — HTTP handlers, database drivers, and Redis. The core layer contains our domain models, port interfaces, and service logic, and has zero knowledge of any external infrastructure.
Installing Dependencies
Initialize the project and install the required packages:
go mod init github.com/yourusername/go-redis-job-listings
go get github.com/gin-gonic/gin
go get gorm.io/gorm
go get gorm.io/driver/postgres
go get github.com/redis/go-redis/v9
go get github.com/golang-jwt/jwt/v5
go get github.com/joho/godotenv
go get golang.org/x/crypto/bcrypt
Setting Up the Task Runner
We’ll use Task as our command runner to avoid typing long commands repeatedly. Install it then initialize the Taskfile:
# macOS
brew install go-task
# or via npm
npm install -g @go-task/cli
# initialize
task --init
Update Taskfile.yml with the following:
# https://taskfile.dev
version: '3'

dotenv:
  - .env

tasks:
  compose:up:
    desc: "run all docker containers"
    cmd: docker compose up -d

  compose:down:
    desc: "stop all docker containers"
    cmd: docker compose down

  db:cli:
    desc: "access postgres cli"
    cmd: docker exec -it postgres psql -U {{ .DB_USER }} -d {{ .DB_NAME }}

  redis:cli:
    desc: "access redis cli"
    cmd: docker exec -it redis redis-cli -a {{ .REDIS_PASSWORD }}

  dev:
    desc: "serve backend api"
    cmd: air
With this in place, you can run task compose:up to start your containers, task dev to start the development server, and task redis:cli to jump into the Redis CLI — all without memorizing Docker commands.
Setting Up Live Reload with Air
Air watches your Go files and automatically rebuilds and restarts the server on every change. Install and initialize it:
go install github.com/air-verse/air@latest
air init
This generates a .air.toml config file. The most important line is in the [build] section — make sure it points to your entrypoint:
cmd = "go build -o ./tmp/main.exe ./cmd/http/main.go"
This tells Air to compile from cmd/http/main.go and output the binary to tmp/. The .exe extension is for Windows — on Linux or macOS, drop it and use ./tmp/main instead. The rest of the .air.toml defaults are fine as-is.
Environment Variables
Create a .env file in the project root:
APP_NAME=go-redis-job-listings
APP_ENV=development
HTTP_HOST=127.0.0.1
HTTP_PORT=8080
DB_HOST=127.0.0.1
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=admin
DB_NAME=job_listings
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_PASSWORD=admin
REDIS_DB=0
ACCESS_TOKEN_SECRET=2b9ac81db8f8a3bb8764011b9d278...
REFRESH_TOKEN_SECRET=cd12ce92258860d115afd29254f3...
# access token in mins, refresh in days
ACCESS_TOKEN_DURATION=15
REFRESH_TOKEN_DURATION=7
Generate secure random secrets for ACCESS_TOKEN_SECRET and REFRESH_TOKEN_SECRET in production — never commit real secrets to version control.
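One simple way to generate such secrets — assuming openssl is available on your machine — is to encode 32 random bytes as hex, producing a 64-character string:

```shell
# 32 random bytes, hex-encoded -> a 64-character secret
openssl rand -hex 32
```

Run it once per secret and paste each output into .env.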
Spinning Up PostgreSQL and Redis with Docker
Our docker-compose.yml defines two services — a PostgreSQL 17 database and a Redis 7.4 instance, both with persistent volumes so data survives container restarts:
services:
  db:
    image: postgres:17-alpine
    restart: always
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_DB: ${DB_NAME}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    container_name: postgres

  redis:
    image: redis:7.4-alpine
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --save 20 1 --loglevel warning --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    container_name: redis

volumes:
  postgres_data:
  redis_data:
Start both containers with:
task compose:up
You can verify Redis is running by jumping into its CLI:
task redis:cli
If you see the 127.0.0.1:6379> prompt, you're good to go.
Connecting to PostgreSQL and Redis
The config.go file reads all environment variables into a typed Container struct, which gets passed through the application via dependency injection. In non-production environments it loads from .env automatically via godotenv:
func New() (*Container, error) {
	if os.Getenv("APP_ENV") != "production" {
		if err := godotenv.Load(); err != nil {
			return nil, err
		}
	}
	// ...
}
The PostgreSQL connection uses GORM with a standard DSN string, and the Redis connection uses go-redis with a Ping check on startup — so the app fails fast if either service is unreachable rather than crashing later during a request:
// postgres: fails fast if DB is unreachable
db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})

// redis: fails fast if Redis is unreachable
if err := client.Ping(ctx).Err(); err != nil {
	return nil, err
}
Here’s the complete adapter/storage/postgres/db.go:
package postgres

import (
	"context"
	"fmt"

	"github.com/yehezkiel1086/go-redis-job-listings/internal/adapter/config"
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

type DB struct {
	db *gorm.DB
}

func New(ctx context.Context, conf *config.DB) (*DB, error) {
	dsn := fmt.Sprintf("host=%s user=%s password=%s dbname=%s port=%s sslmode=disable TimeZone=Asia/Jakarta", conf.Host, conf.User, conf.Password, conf.Name, conf.Port)

	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		return nil, err
	}

	return &DB{db: db}, nil
}

func (d *DB) GetDB() *gorm.DB {
	return d.db
}

func (d *DB) Migrate(dbs ...any) error {
	return d.db.AutoMigrate(dbs...)
}
And here’s the complete adapter/storage/redis/redis.go:
package redis

import (
	"context"
	"fmt"
	"strconv"
	"time"

	"github.com/redis/go-redis/v9"
	"github.com/yehezkiel1086/go-redis-job-listings/internal/adapter/config"
)

type Redis struct {
	client *redis.Client
}

func New(ctx context.Context, conf *config.Redis) (*Redis, error) {
	db, err := strconv.Atoi(conf.DB)
	if err != nil {
		return nil, err
	}

	client := redis.NewClient(&redis.Options{
		Addr:     fmt.Sprintf("%s:%s", conf.Host, conf.Port),
		Password: conf.Password,
		DB:       db,
		Protocol: 2,
	})

	if err := client.Ping(ctx).Err(); err != nil {
		return nil, err
	}

	return &Redis{client}, nil
}

func (r *Redis) Close() error {
	return r.client.Close()
}

func (r *Redis) Set(ctx context.Context, key string, value any, ttl time.Duration) error {
	return r.client.Set(ctx, key, value, ttl).Err()
}

func (r *Redis) Get(ctx context.Context, key string) (string, error) {
	val, err := r.client.Get(ctx, key).Result()
	if err != nil {
		return "", err
	}
	return val, nil
}

func (r *Redis) Delete(ctx context.Context, keys ...string) error {
	return r.client.Del(ctx, keys...).Err()
}

func (r *Redis) DeleteByPrefix(ctx context.Context, prefix string) error {
	var cursor uint64
	for {
		keys, nextCursor, err := r.client.Scan(ctx, cursor, prefix, 100).Result()
		if err != nil {
			return err
		}
		for _, key := range keys {
			if err := r.client.Del(ctx, key).Err(); err != nil {
				return err
			}
		}
		cursor = nextCursor
		if cursor == 0 {
			break
		}
	}
	return nil
}
With both services connected, we’re ready to start building the core of the application.
Redis Caching Implementation
Before we dive into the individual features, let’s understand the Redis infrastructure that powers the caching across the entire application. There are three files that work together: the concrete Redis adapter, the cache port interface, and the cache utility helpers.
The Cache Port
Following hexagonal architecture principles, we never let the application core depend directly on Redis. Instead, we define a CacheRepository interface in the port layer that describes what caching operations we need, without caring how they're implemented:
// port/cache.go
package port

import (
	"context"
	"time"
)

type CacheRepository interface {
	Set(ctx context.Context, key string, value any, ttl time.Duration) error
	Get(ctx context.Context, key string) (string, error)
	Delete(ctx context.Context, keys ...string) error
	DeleteByPrefix(ctx context.Context, prefix string) error
}
Four operations cover everything we need. Set stores a value with an expiry. Get retrieves it. Delete removes one or more keys in a single call. DeleteByPrefix scans and removes all keys matching a pattern — useful when we need to wipe an entire group of related cache entries at once, like all filtered variations of a job listing query.
Adding a //go:generate directive at the top of this file (pointing at your mock generator of choice, such as mockgen) also means you can auto-generate a mock implementation for unit testing with a single go generate ./... command.
The Redis Adapter
The concrete implementation lives in the adapter layer and wraps the go-redis client:
// adapter/storage/redis/redis.go
package redis

import (
	"context"
	"fmt"
	"strconv"
	"time"

	"github.com/redis/go-redis/v9"
	"github.com/yehezkiel1086/go-redis-job-listings/internal/adapter/config"
)

type Redis struct {
	client *redis.Client
}

func New(ctx context.Context, conf *config.Redis) (*Redis, error) {
	db, err := strconv.Atoi(conf.DB)
	if err != nil {
		return nil, err
	}

	client := redis.NewClient(&redis.Options{
		Addr:     fmt.Sprintf("%s:%s", conf.Host, conf.Port),
		Password: conf.Password,
		DB:       db,
		Protocol: 2,
	})

	if err := client.Ping(ctx).Err(); err != nil {
		return nil, err
	}

	return &Redis{client}, nil
}
The New function initializes the client and immediately calls Ping — if Redis is unreachable at startup, the application fails fast with a clear error rather than silently starting and crashing later during a real request. Protocol: 2 pins the client to RESP2, the classic Redis wire protocol. go-redis v9 defaults to RESP3 (which adds richer data types and server push), so setting Protocol: 2 explicitly keeps the client compatible with older Redis servers and proxies that only speak RESP2.
The four operations that implement CacheRepository are thin wrappers around go-redis:
func (r *Redis) Set(ctx context.Context, key string, value any, ttl time.Duration) error {
	return r.client.Set(ctx, key, value, ttl).Err()
}

func (r *Redis) Get(ctx context.Context, key string) (string, error) {
	val, err := r.client.Get(ctx, key).Result()
	if err != nil {
		return "", err
	}
	return val, nil
}

func (r *Redis) Delete(ctx context.Context, keys ...string) error {
	return r.client.Del(ctx, keys...).Err()
}
Delete accepts variadic keys, which means we can invalidate multiple cache entries in a single Redis round-trip. You'll see this used frequently in the service layer — for example, when a user is updated, both user:{id} and user:all are deleted in one call: cache.Delete(ctx, "user:1", "user:all").
The most interesting method is DeleteByPrefix:
func (r *Redis) DeleteByPrefix(ctx context.Context, prefix string) error {
	var cursor uint64
	for {
		keys, nextCursor, err := r.client.Scan(ctx, cursor, prefix, 100).Result()
		if err != nil {
			return err
		}
		for _, key := range keys {
			if err := r.client.Del(ctx, key).Err(); err != nil {
				return err
			}
		}
		cursor = nextCursor
		if cursor == 0 {
			break
		}
	}
	return nil
}
This is needed because some cache entries can’t be tracked by a single fixed key. The job listings cache, for example, stores each unique filter combination as its own key — job:all:remote--senior-Jakarta-, job:all:full_time---Go developer-, and so on. When a job is created or updated, we can't know which filter keys exist in Redis at that moment, so we use SCAN to find all keys matching the job:all* pattern and delete them all.
We use SCAN instead of KEYS deliberately. KEYS blocks the entire Redis server while it scans — in production with millions of keys, that's a serious performance problem. SCAN is non-blocking and iterates in batches (the COUNT of 100 is a hint to Redis, not a hard limit), using a cursor to page through results until the cursor returns to 0, signaling that the full keyspace has been covered.
Cache Utilities
Three small helpers in util/cache.go keep the caching code in the service layer clean and consistent:
// util/cache.go
package util

import (
	"encoding/json"
	"fmt"
)

func GenerateCacheKey(prefix string, params any) string {
	return fmt.Sprintf("%s:%v", prefix, params)
}

func GenerateCacheKeyParams(params ...any) string {
	var str string
	for i, param := range params {
		str += fmt.Sprintf("%v", param)
		if i != len(params)-1 {
			str += "-"
		}
	}
	return str
}

func Serialize(data any) ([]byte, error) {
	return json.Marshal(data)
}

func Deserialize(data []byte, output any) error {
	return json.Unmarshal(data, output)
}
GenerateCacheKey produces keys like user:1 or job:42 by combining a prefix with a single identifier. GenerateCacheKeyParams builds compound parameter strings like remote--senior-Jakarta- by joining multiple values with - separators — this is used to encode filter structs into cache keys so each unique filter combination gets its own cache entry.
Serialize and Deserialize are thin wrappers around json.Marshal and json.Unmarshal. Since Redis stores everything as strings or bytes, we serialize Go structs to JSON before storing them and deserialize back on retrieval. Every cached value in this application — users, jobs, enrollments — goes through these two functions.
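To make the key shapes concrete, here's a standalone sketch that copies the two helpers and prints the keys they produce — the filter values are illustrative, not tied to a specific endpoint:

```go
package main

import "fmt"

// Copies of the two helpers from util/cache.go, shown standalone
// so the resulting key shapes are easy to inspect.
func GenerateCacheKey(prefix string, params any) string {
	return fmt.Sprintf("%s:%v", prefix, params)
}

func GenerateCacheKeyParams(params ...any) string {
	var str string
	for i, param := range params {
		str += fmt.Sprintf("%v", param)
		if i != len(params)-1 {
			str += "-"
		}
	}
	return str
}

func main() {
	fmt.Println(GenerateCacheKey("user", 1)) // user:1

	// Empty filter fields collapse into consecutive dashes:
	filters := GenerateCacheKeyParams("remote", "", "senior", "Jakarta", "")
	fmt.Println(GenerateCacheKey("job:all", filters)) // job:all:remote--senior-Jakarta-
}
```

Note how each unique filter combination yields a distinct key, which is exactly why invalidation has to happen by prefix rather than by exact key.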
Here’s the complete flow to visualize how all three files work together on a cache hit versus a cache miss:
GET /users/1
     │
     ▼
UserService.GetUserByID
     │
     └── cache.Get("user:1")
           │
           ├── HIT  → Deserialize → return UserResponse ✓ (no DB call)
           │
           └── MISS → repo.GetUserByID(1)
                        │
                        └── PostgreSQL query
                              │
                              └── Serialize → cache.Set("user:1", data, 5m)
                                    │
                                    └── return UserResponse ✓
With this infrastructure in place, every service in the application — users, jobs, enrollments, and auth — can plug into the same caching layer through the CacheRepository interface, with no direct dependency on Redis anywhere in the core business logic.
User
The user layer handles registration, profile management, and admin operations. It’s also where we first introduce Redis caching — so let’s walk through each layer and explain exactly where and why caching is applied.
Domain
The domain model is the foundation of everything. User embeds gorm.Model for automatic id, created_at, updated_at, and deleted_at fields. The Role type is a named uint32 rather than a plain string, so the compiler stops you from passing an arbitrary string or number where a Role is expected (though an explicit Role(999) conversion is still possible).
// domain/user.go
package domain

import "gorm.io/gorm"

type Role uint32

const (
	RoleUser  Role = 2001
	RoleAdmin Role = 5150
)

type User struct {
	gorm.Model
	Email    string `json:"email" gorm:"size:255;unique;not null"`
	Password string `json:"password" gorm:"size:255;not null"`
	Name     string `json:"name" gorm:"size:255;not null"`
	Role     Role   `json:"role" gorm:"not null;default:2001"`
	Jobs     []Job  `json:"jobs,omitempty" gorm:"foreignKey:UserID"`
}

type RegisterRequest struct {
	Email    string `json:"email" binding:"required,email"`
	Password string `json:"password" binding:"required,min=6"`
	Name     string `json:"name" binding:"required,min=3,max=25"`
}

type UserResponse struct {
	ID    uint   `json:"id"`
	Email string `json:"email"`
	Name  string `json:"name"`
	Role  Role   `json:"role"`
}
Notice that RegisterRequest and UserResponse are intentionally separate from User. RegisterRequest enforces input validation via binding tags, and UserResponse strips sensitive fields like Password from API responses — the User struct itself is never returned directly to the client.
Port
The port layer defines the contracts between layers as Go interfaces. Nothing in the service or handler knows about Postgres or Redis directly — they only depend on these interfaces, which is what makes the architecture testable and swappable.
// port/user.go
package port

import (
	"context"

	"github.com/yehezkiel1086/go-redis-job-listings/internal/core/domain"
)

type UserRepository interface {
	CreateUser(ctx context.Context, user *domain.User) (*domain.User, error)
	GetUserByEmail(ctx context.Context, email string) (*domain.User, error)
	GetUserByID(ctx context.Context, id uint) (*domain.User, error)
	UpdateUser(ctx context.Context, user *domain.User) (*domain.User, error)
	DeleteUserByID(ctx context.Context, id uint) error
	GetAllUsers(ctx context.Context) ([]domain.User, error)
}

type UserService interface {
	RegisterUser(ctx context.Context, user *domain.User) (*domain.UserResponse, error)
	GetUserByID(ctx context.Context, id uint) (*domain.UserResponse, error)
	GetAllUsers(ctx context.Context) ([]domain.UserResponse, error)
	UpdateUser(ctx context.Context, id uint, updates *domain.User) (*domain.UserResponse, error)
	DeleteUserByID(ctx context.Context, id uint) error
}
UserRepository speaks to the database. UserService speaks to the handler. Neither interface mentions Redis — the caching concern is an implementation detail hidden inside the service.
Repository
The repository is the only layer that talks directly to PostgreSQL via GORM. It has no caching logic whatsoever — that’s intentional. The repository’s single responsibility is database access.
// repository/user.go
package repository

import (
	"context"
	"strings"

	"github.com/yehezkiel1086/go-redis-job-listings/internal/adapter/storage/postgres"
	"github.com/yehezkiel1086/go-redis-job-listings/internal/core/domain"
)

type UserRepository struct {
	db *postgres.DB
}

func NewUserRepository(db *postgres.DB) *UserRepository {
	return &UserRepository{db: db}
}

func (r *UserRepository) CreateUser(ctx context.Context, user *domain.User) (*domain.User, error) {
	if err := r.db.GetDB().WithContext(ctx).Create(user).Error; err != nil {
		if strings.Contains(err.Error(), "duplicate key") {
			return nil, domain.ErrDuplicateEmail
		}
		return nil, err
	}
	return user, nil
}

func (r *UserRepository) GetUserByEmail(ctx context.Context, email string) (*domain.User, error) {
	var user domain.User
	if err := r.db.GetDB().WithContext(ctx).Where("email = ?", email).First(&user).Error; err != nil {
		return nil, err
	}
	return &user, nil
}

func (r *UserRepository) GetUserByID(ctx context.Context, id uint) (*domain.User, error) {
	var user domain.User
	if err := r.db.GetDB().WithContext(ctx).First(&user, id).Error; err != nil {
		return nil, err
	}
	return &user, nil
}

func (r *UserRepository) UpdateUser(ctx context.Context, user *domain.User) (*domain.User, error) {
	if err := r.db.GetDB().WithContext(ctx).Save(user).Error; err != nil {
		return nil, err
	}
	return user, nil
}

func (r *UserRepository) DeleteUserByID(ctx context.Context, id uint) error {
	return r.db.GetDB().WithContext(ctx).Delete(&domain.User{}, id).Error
}

func (r *UserRepository) GetAllUsers(ctx context.Context) ([]domain.User, error) {
	var users []domain.User
	if err := r.db.GetDB().WithContext(ctx).Find(&users).Error; err != nil {
		return nil, err
	}
	return users, nil
}
One thing worth noting in CreateUser — we inspect the error message for "duplicate key" and map it to our domain sentinel error ErrDuplicateEmail. This keeps the service layer clean from database-specific error types.
Service — Where Redis Comes In
This is where the caching strategy lives. The UserService holds both a UserRepository and a CacheRepository, and applies the cache-aside pattern on every read operation.
// service/user.go
package service

import (
	"context"
	"time"

	"github.com/yehezkiel1086/go-redis-job-listings/internal/core/domain"
	"github.com/yehezkiel1086/go-redis-job-listings/internal/core/port"
	"github.com/yehezkiel1086/go-redis-job-listings/internal/core/util"
	"golang.org/x/crypto/bcrypt"
)

const (
	userPrefix   = "user"
	allUsersKey  = "user:all"
	userCacheTTL = 5 * time.Minute
)

type UserService struct {
	repo  port.UserRepository
	cache port.CacheRepository
}

func NewUserService(repo port.UserRepository, cache port.CacheRepository) *UserService {
	return &UserService{repo: repo, cache: cache}
}
The cache keys follow a consistent naming convention. Individual users are stored under user:{id} (e.g. user:1, user:42), and the full user list is stored under the fixed key user:all. This makes invalidation straightforward — we always know exactly which keys to delete.
Read operations — cache-aside pattern:
func (s *UserService) GetUserByID(ctx context.Context, id uint) (*domain.UserResponse, error) {
	key := util.GenerateCacheKey(userPrefix, id)

	// cache hit: return early.
	if cached, err := s.cache.Get(ctx, key); err == nil {
		var resp domain.UserResponse
		if err := util.Deserialize([]byte(cached), &resp); err == nil {
			return &resp, nil
		}
	}

	// cache miss: fetch from DB.
	user, err := s.repo.GetUserByID(ctx, id)
	if err != nil {
		return nil, domain.ErrNotFound
	}
	resp := util.ToUserResponse(user)

	// populate cache, ignore error to not block the response.
	if value, err := util.Serialize(resp); err == nil {
		_ = s.cache.Set(ctx, key, value, userCacheTTL)
	}
	return resp, nil
}

func (s *UserService) GetAllUsers(ctx context.Context) ([]domain.UserResponse, error) {
	if cached, err := s.cache.Get(ctx, allUsersKey); err == nil {
		var responses []domain.UserResponse
		if err := util.Deserialize([]byte(cached), &responses); err == nil {
			return responses, nil
		}
	}

	users, err := s.repo.GetAllUsers(ctx)
	if err != nil {
		return nil, err
	}

	responses := make([]domain.UserResponse, 0, len(users))
	for _, u := range users {
		responses = append(responses, *util.ToUserResponse(&u))
	}

	// store the serialized bytes, not the raw slice.
	if value, err := util.Serialize(responses); err == nil {
		_ = s.cache.Set(ctx, allUsersKey, value, userCacheTTL)
	}
	return responses, nil
}
The cache-aside flow is always the same three steps: check the cache first, fall back to the database on a miss, then populate the cache for next time. Cache errors are intentionally swallowed with _ — if Redis goes down, the service degrades gracefully to database reads rather than returning errors to the client.
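To see the graceful-degradation behavior in isolation, here's a contrived sketch — failingCache and getUser are illustrative stand-ins, not project code — where every cache call fails and the read still succeeds:

```go
package main

import (
	"errors"
	"fmt"
)

// failingCache simulates Redis being down: every operation errors.
type failingCache struct{}

func (failingCache) Get(key string) (string, error) {
	return "", errors.New("redis: connection refused")
}

func (failingCache) Set(key, value string) error {
	return errors.New("redis: connection refused")
}

// getUser follows the same cache-aside shape as the service above:
// cache errors are swallowed, so a dead cache degrades to DB reads.
func getUser(cache failingCache, id int) string {
	key := fmt.Sprintf("user:%d", id)
	if v, err := cache.Get(key); err == nil {
		return v // cache hit
	}
	user := fmt.Sprintf("user-%d-from-db", id) // stand-in for repo.GetUserByID
	_ = cache.Set(key, user)                   // best-effort repopulate, error ignored
	return user
}

func main() {
	// Redis is "down", yet the read still returns data.
	fmt.Println(getUser(failingCache{}, 1)) // user-1-from-db
}
```

The only cost of an outage is latency (every read hits PostgreSQL), never correctness.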
Write operations — cache invalidation:
func (s *UserService) RegisterUser(ctx context.Context, user *domain.User) (*domain.UserResponse, error) {
	existing, err := s.repo.GetUserByEmail(ctx, user.Email)
	if err == nil && existing != nil {
		return nil, domain.ErrDuplicateEmail
	}

	hashedPassword, err := bcrypt.GenerateFromPassword([]byte(user.Password), bcrypt.DefaultCost)
	if err != nil {
		return nil, err
	}
	user.Password = string(hashedPassword)

	if user.Role == 0 {
		user.Role = domain.RoleUser
	}

	created, err := s.repo.CreateUser(ctx, user)
	if err != nil {
		return nil, err
	}

	// invalidate all users cache since the list has changed.
	_ = s.cache.Delete(ctx, allUsersKey)

	return util.ToUserResponse(created), nil
}

func (s *UserService) UpdateUser(ctx context.Context, id uint, updates *domain.User) (*domain.UserResponse, error) {
	user, err := s.repo.GetUserByID(ctx, id)
	if err != nil {
		return nil, domain.ErrNotFound
	}

	if updates.Name != "" {
		user.Name = updates.Name
	}
	if updates.Email != "" && updates.Email != user.Email {
		existing, _ := s.repo.GetUserByEmail(ctx, updates.Email)
		if existing != nil {
			return nil, domain.ErrDuplicateEmail
		}
		user.Email = updates.Email
	}
	if updates.Password != "" {
		hashed, err := bcrypt.GenerateFromPassword([]byte(updates.Password), bcrypt.DefaultCost)
		if err != nil {
			return nil, err
		}
		user.Password = string(hashed)
	}
	if updates.Role != 0 {
		user.Role = updates.Role
	}

	updated, err := s.repo.UpdateUser(ctx, user)
	if err != nil {
		return nil, err
	}

	// invalidate both the individual entry and the list.
	_ = s.cache.Delete(ctx, util.GenerateCacheKey(userPrefix, id), allUsersKey)

	return util.ToUserResponse(updated), nil
}

func (s *UserService) DeleteUserByID(ctx context.Context, id uint) error {
	_, err := s.repo.GetUserByID(ctx, id)
	if err != nil {
		return domain.ErrNotFound
	}

	if err := s.repo.DeleteUserByID(ctx, id); err != nil {
		return err
	}

	// invalidate both the individual entry and the list.
	_ = s.cache.Delete(ctx, util.GenerateCacheKey(userPrefix, id), allUsersKey)

	return nil
}
The invalidation strategy is straightforward but important. RegisterUser only invalidates user:all because a new user was added to the list. UpdateUser and DeleteUserByID invalidate both user:{id} and user:all in a single Delete call because both the individual record and the list are now stale. We never update the cache in-place on writes — we always delete and let the next read repopulate it, which avoids any risk of serving a partially updated cached value.
Handler
The handler’s job is simple — parse the request, call the service, map errors to HTTP status codes, and return the response. It has no knowledge of Redis at all.
// handler/user.go
package handler

import (
	"errors"
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/yehezkiel1086/go-redis-job-listings/internal/core/domain"
	"github.com/yehezkiel1086/go-redis-job-listings/internal/core/port"
	"github.com/yehezkiel1086/go-redis-job-listings/internal/core/util"
)

type UserHandler struct {
	svc port.UserService
}

func NewUserHandler(svc port.UserService) *UserHandler {
	return &UserHandler{svc: svc}
}

func (h *UserHandler) RegisterUser(c *gin.Context) {
	var user domain.RegisterRequest
	if err := c.ShouldBindJSON(&user); err != nil {
		c.JSON(http.StatusBadRequest, util.ErrorResponse{Error: err.Error()})
		return
	}

	resp, err := h.svc.RegisterUser(c.Request.Context(), &domain.User{
		Email:    user.Email,
		Password: user.Password,
		Name:     user.Name,
	})
	if err != nil {
		if errors.Is(err, domain.ErrDuplicateEmail) {
			c.JSON(http.StatusConflict, util.ErrorResponse{Error: err.Error()})
			return
		}
		c.JSON(http.StatusInternalServerError, util.ErrorResponse{Error: err.Error()})
		return
	}

	c.JSON(http.StatusCreated, resp)
}

func (h *UserHandler) GetUserByID(c *gin.Context) {
	id, err := util.ParseID(c)
	if err != nil {
		c.JSON(http.StatusBadRequest, util.ErrorResponse{Error: "invalid user id"})
		return
	}

	resp, err := h.svc.GetUserByID(c.Request.Context(), id)
	if err != nil {
		if errors.Is(err, domain.ErrNotFound) {
			c.JSON(http.StatusNotFound, util.ErrorResponse{Error: err.Error()})
			return
		}
		c.JSON(http.StatusInternalServerError, util.ErrorResponse{Error: err.Error()})
		return
	}

	c.JSON(http.StatusOK, resp)
}

func (h *UserHandler) GetAllUsers(c *gin.Context) {
	resp, err := h.svc.GetAllUsers(c.Request.Context())
	if err != nil {
		c.JSON(http.StatusInternalServerError, util.ErrorResponse{Error: err.Error()})
		return
	}
	c.JSON(http.StatusOK, resp)
}

func (h *UserHandler) UpdateUser(c *gin.Context) {
	id, err := util.ParseID(c)
	if err != nil {
		c.JSON(http.StatusBadRequest, util.ErrorResponse{Error: "invalid user id"})
		return
	}

	var updates domain.User
	if err := c.ShouldBindJSON(&updates); err != nil {
		c.JSON(http.StatusBadRequest, util.ErrorResponse{Error: err.Error()})
		return
	}

	resp, err := h.svc.UpdateUser(c.Request.Context(), id, &updates)
	if err != nil {
		switch {
		case errors.Is(err, domain.ErrNotFound):
			c.JSON(http.StatusNotFound, util.ErrorResponse{Error: err.Error()})
		case errors.Is(err, domain.ErrDuplicateEmail):
			c.JSON(http.StatusConflict, util.ErrorResponse{Error: err.Error()})
		default:
			c.JSON(http.StatusInternalServerError, util.ErrorResponse{Error: err.Error()})
		}
		return
	}

	c.JSON(http.StatusOK, resp)
}

func (h *UserHandler) DeleteUserByID(c *gin.Context) {
	id, err := util.ParseID(c)
	if err != nil {
		c.JSON(http.StatusBadRequest, util.ErrorResponse{Error: "invalid user id"})
		return
	}

	if err := h.svc.DeleteUserByID(c.Request.Context(), id); err != nil {
		if errors.Is(err, domain.ErrNotFound) {
			c.JSON(http.StatusNotFound, util.ErrorResponse{Error: err.Error()})
			return
		}
		c.JSON(http.StatusInternalServerError, util.ErrorResponse{Error: err.Error()})
		return
	}

	c.Status(http.StatusNoContent)
}
The handler depends only on port.UserService — an interface, not a concrete struct. This means the handler has no coupling to either Postgres or Redis. The entire caching layer could be removed or replaced without changing a single line in the handler.
This is the core benefit of hexagonal architecture applied to caching: Redis is an infrastructure detail that lives entirely within the service implementation, invisible to everything above it.
Auth — Token Rotation with Redis
The auth layer handles login, token refresh, and logout. This is where Redis plays a different role compared to the user and job caching we saw earlier — instead of caching database query results to improve read performance, here Redis acts as a token store to enable true server-side session control.
The Problem with Stateless JWTs
JWTs are stateless by design — once issued, a token is valid until it expires, and the server has no way to revoke it. That's fine for short-lived access tokens, but it becomes a problem for refresh tokens, which can live for days or weeks. If a refresh token is stolen or a user logs out, you want that token to be immediately invalid — not still usable for the next 7 days.
The solution is to store refresh tokens in Redis. On login, the refresh token is written to Redis. On every refresh or logout request, Redis is checked first. If the token isn’t in Redis, it’s rejected — even if the JWT signature itself is still technically valid.
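The check itself reduces to a single lookup. In this sketch a plain map stands in for Redis, and validateRefresh is an illustrative helper (not the project's actual function names) showing the decision the auth service makes:

```go
package main

import (
	"errors"
	"fmt"
)

var errTokenRevoked = errors.New("refresh token revoked or expired")

// store stands in for Redis: key "refresh_token:{token}" -> serialized user ID.
type store map[string]string

// validateRefresh mirrors the server-side check: the JWT may still be
// cryptographically valid, but if its key is gone from the store
// (logout, rotation, or TTL expiry), the refresh is rejected.
func validateRefresh(s store, token string) (string, error) {
	userID, ok := s["refresh_token:"+token]
	if !ok {
		return "", errTokenRevoked
	}
	return userID, nil
}

func main() {
	s := store{"refresh_token:abc123": "1"}

	if id, err := validateRefresh(s, "abc123"); err == nil {
		fmt.Println("valid session for user", id) // valid session for user 1
	}

	delete(s, "refresh_token:abc123") // logout removes the key
	if _, err := validateRefresh(s, "abc123"); err != nil {
		fmt.Println("rejected:", err) // rejected: refresh token revoked or expired
	}
}
```

In the real service the TTL on the Redis key doubles as the session lifetime, so expired tokens disappear without any cleanup code.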
Domain
// domain/jwt.go
package domain

import "github.com/golang-jwt/jwt/v5"

type TokenType string

const (
	AccessToken  TokenType = "access_token"
	RefreshToken TokenType = "refresh_token"
)

type JWTClaims struct {
	UserID uint `json:"user_id"`
	Role   Role `json:"role"`
	jwt.RegisteredClaims
}

type LoginRequest struct {
	Email    string `json:"email" binding:"required,email"`
	Password string `json:"password" binding:"required,min=6"`
}

type LoginResponse struct {
	AccessToken  string `json:"access_token"`
	RefreshToken string `json:"refresh_token"`
}
JWTClaims embeds jwt.RegisteredClaims which carries the standard JWT fields like ExpiresAt. We add UserID and Role as custom claims so every authenticated request has access to who the user is and what they're allowed to do — without needing a database lookup on every request.
LoginResponse returns both tokens in the response body. The handler then also sets them as httpOnly cookies, giving flexibility for both browser clients (which use cookies automatically) and API clients (which can use the response body).
Service — Redis as a Token Store
// service/auth.go
package service
import (
    "context"

    "github.com/yehezkiel1086/go-redis-job-listings/internal/adapter/config"
    "github.com/yehezkiel1086/go-redis-job-listings/internal/core/domain"
    "github.com/yehezkiel1086/go-redis-job-listings/internal/core/port"
    "github.com/yehezkiel1086/go-redis-job-listings/internal/core/util"
)

const refreshTokenPrefix = "refresh_token"

type AuthService struct {
    conf     *config.JWT
    userRepo port.UserRepository
    cache    port.CacheRepository
}

func NewAuthService(conf *config.JWT, userRepo port.UserRepository, cache port.CacheRepository) *AuthService {
    return &AuthService{conf: conf, userRepo: userRepo, cache: cache}
}
The cache key for each refresh token follows the pattern refresh_token:{token_string} — the token itself is part of the key. This means each user session gets its own entry, and a user can have multiple active sessions simultaneously (e.g. logged in on both mobile and desktop).
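GenerateCacheKey itself lives in the util package and isn't reproduced in this article; a minimal sketch consistent with the refresh_token:{token} and job:{id} patterns above (the name and exact signature here are inferred from the call sites, not copied from the repo) could be as simple as:

```go
package main

import "fmt"

// generateCacheKey joins a prefix and a parameter with ":" —
// e.g. ("refresh_token", "eyJhbGci...") → "refresh_token:eyJhbGci...",
// ("job", uint(42)) → "job:42". Sketch only; the real util.GenerateCacheKey
// may differ in detail.
func generateCacheKey(prefix string, param any) string {
    return fmt.Sprintf("%s:%v", prefix, param)
}

func main() {
    fmt.Println(generateCacheKey("refresh_token", "abc123")) // refresh_token:abc123
    fmt.Println(generateCacheKey("job", uint(42)))           // job:42
}
```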
Login — issuing and storing tokens:
func (s *AuthService) Login(ctx context.Context, email, password string) (*domain.LoginResponse, error) {
    user, err := s.userRepo.GetUserByEmail(ctx, email)
    if err != nil {
        return nil, domain.ErrInvalidCredentials
    }
    if err := util.ComparePassword([]byte(user.Password), []byte(password)); err != nil {
        return nil, domain.ErrInvalidCredentials
    }
    accessToken, err := util.GenerateJWTToken(domain.AccessToken, s.conf, user)
    if err != nil {
        return nil, domain.ErrInternalServerError
    }
    refreshToken, err := util.GenerateJWTToken(domain.RefreshToken, s.conf, user)
    if err != nil {
        return nil, domain.ErrInternalServerError
    }
    value, err := util.Serialize(user.ID)
    if err != nil {
        return nil, domain.ErrInternalServerError
    }
    if err := s.cache.Set(ctx, util.GenerateCacheKey(refreshTokenPrefix, refreshToken), value, util.RefreshTokenTTL(s.conf)); err != nil {
        return nil, domain.ErrInternalServerError
    }
    return &domain.LoginResponse{
        AccessToken:  accessToken,
        RefreshToken: refreshToken,
    }, nil
}
After validating credentials and generating both tokens, we serialize the user’s ID and store it in Redis under refresh_token:{token} with a TTL that matches the token's expiry duration. The TTL is critical — it means Redis will automatically clean up expired tokens, so we never need a manual cleanup job.
Unlike user and job caching where Redis errors are silently swallowed, here a Redis failure on Set returns an error to the client. This is intentional: if we can't store the refresh token, the token has no server-side record and would be immediately rejected on the next refresh attempt. It's better to fail the login than to issue a token that can never be refreshed.
Refresh — token rotation:
func (s *AuthService) Refresh(ctx context.Context, rawRefreshToken string) (*domain.LoginResponse, error) {
    claims, err := util.ParseJWTToken(domain.RefreshToken, s.conf, rawRefreshToken)
    if err != nil {
        return nil, domain.ErrInvalidToken
    }
    key := util.GenerateCacheKey(refreshTokenPrefix, rawRefreshToken)
    if _, err := s.cache.Get(ctx, key); err != nil {
        return nil, domain.ErrInvalidToken
    }
    user, err := s.userRepo.GetUserByID(ctx, claims.UserID)
    if err != nil {
        return nil, domain.ErrInvalidToken
    }
    // Rotate: revoke old token, issue new pair.
    _ = s.cache.Delete(ctx, key)
    accessToken, err := util.GenerateJWTToken(domain.AccessToken, s.conf, user)
    if err != nil {
        return nil, domain.ErrInternalServerError
    }
    newRefreshToken, err := util.GenerateJWTToken(domain.RefreshToken, s.conf, user)
    if err != nil {
        return nil, domain.ErrInternalServerError
    }
    value, err := util.Serialize(user.ID)
    if err != nil {
        return nil, domain.ErrInternalServerError
    }
    if err := s.cache.Set(ctx, util.GenerateCacheKey(refreshTokenPrefix, newRefreshToken), value, util.RefreshTokenTTL(s.conf)); err != nil {
        return nil, domain.ErrInternalServerError
    }
    return &domain.LoginResponse{
        AccessToken:  accessToken,
        RefreshToken: newRefreshToken,
    }, nil
}
Refresh performs two validation steps before issuing new tokens. First it verifies the JWT signature and expiry via ParseJWTToken. Then it checks Redis — if the token isn't there, it's rejected even if the JWT itself is valid. This is the server-side revocation in action.
After validation, we implement refresh token rotation: the old token is deleted from Redis and a completely new token pair is issued. This limits the window of exposure if a refresh token is ever compromised — once it’s been used to refresh, it’s gone. The delete error is swallowed here since the goal is still to issue new tokens even if the old delete fails.
Logout — true revocation:
func (s *AuthService) Logout(ctx context.Context, rawRefreshToken string) error {
    key := util.GenerateCacheKey(refreshTokenPrefix, rawRefreshToken)
    if _, err := s.cache.Get(ctx, key); err != nil {
        return domain.ErrInvalidToken
    }
    return s.cache.Delete(ctx, key)
}
Logout is just two Redis operations: verify the token exists, then delete it. From this point on, any attempt to use the refresh token will fail the cache.Get check in Refresh and be rejected. No database involved, no JWT blacklist needed — just a Redis delete.
This is the complete lifecycle of a refresh token in Redis:
Login
└── cache.Set("refresh_token:{token}", userID, TTL) → token is active
Refresh
├── cache.Get("refresh_token:{old_token}") → verify token is active
├── cache.Delete("refresh_token:{old_token}") → revoke old token
└── cache.Set("refresh_token:{new_token}", userID, TTL) → issue new token
Logout
├── cache.Get("refresh_token:{token}") → verify token is active
└── cache.Delete("refresh_token:{token}") → revoke permanently
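The rotation guarantee — a refresh token works exactly once — can be sketched with a toy store that swaps a plain map in for Redis (TTL handling omitted; purely illustrative, not the article's actual service code):

```go
package main

import "fmt"

// tokenStore stands in for Redis: cache key → user ID, no TTLs.
type tokenStore map[string]uint

func (s tokenStore) login(token string, userID uint) { s["refresh_token:"+token] = userID }

func (s tokenStore) active(token string) bool {
    _, ok := s["refresh_token:"+token]
    return ok
}

// rotate mirrors Refresh: reject unknown tokens, revoke the old one,
// then store the new one.
func (s tokenStore) rotate(oldTok, newTok string, userID uint) bool {
    if !s.active(oldTok) {
        return false // already rotated, logged out, or never issued
    }
    delete(s, "refresh_token:"+oldTok)
    s.login(newTok, userID)
    return true
}

func (s tokenStore) logout(token string) { delete(s, "refresh_token:"+token) }

func main() {
    s := tokenStore{}
    s.login("tokA", 7)
    fmt.Println(s.rotate("tokA", "tokB", 7)) // true — first use succeeds
    fmt.Println(s.rotate("tokA", "tokC", 7)) // false — reuse is rejected
    s.logout("tokB")
    fmt.Println(s.active("tokB")) // false — revoked on logout
}
```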
Handler
// handler/auth.go
func (h *AuthHandler) Login(c *gin.Context) {
    var req domain.LoginRequest
    if err := c.ShouldBindJSON(&req); err != nil {
        c.JSON(http.StatusBadRequest, util.ErrorResponse{Error: err.Error()})
        return
    }
    resp, err := h.svc.Login(c.Request.Context(), req.Email, req.Password)
    if err != nil {
        if errors.Is(err, domain.ErrInvalidCredentials) {
            c.JSON(http.StatusUnauthorized, util.ErrorResponse{Error: err.Error()})
            return
        }
        c.JSON(http.StatusInternalServerError, util.ErrorResponse{Error: err.Error()})
        return
    }
    // durations are strings in config: access token in minutes, refresh
    // token in days. Convert to seconds for the cookie max age.
    accessTokenDuration, _ := strconv.Atoi(h.conf.AccessTokenDuration)
    refreshTokenDuration, _ := strconv.Atoi(h.conf.RefreshTokenDuration)
    c.SetCookie(string(domain.AccessToken), resp.AccessToken, accessTokenDuration*60, "/api/v1", "", false, true)
    c.SetCookie(string(domain.RefreshToken), resp.RefreshToken, refreshTokenDuration*60*60*24, "/api/v1/refresh", "", false, true)
    c.SetCookie(string(domain.RefreshToken), resp.RefreshToken, refreshTokenDuration*60*60*24, "/api/v1/logout", "", false, true)
    c.JSON(http.StatusOK, resp)
}
func (h *AuthHandler) Refresh(c *gin.Context) {
    refreshToken, err := c.Cookie(string(domain.RefreshToken))
    if err != nil {
        c.JSON(http.StatusUnauthorized, util.ErrorResponse{Error: domain.ErrUnauthorized.Error()})
        return
    }
    resp, err := h.svc.Refresh(c.Request.Context(), refreshToken)
    if err != nil {
        if errors.Is(err, domain.ErrInvalidToken) {
            c.JSON(http.StatusUnauthorized, util.ErrorResponse{Error: err.Error()})
            return
        }
        c.JSON(http.StatusInternalServerError, util.ErrorResponse{Error: err.Error()})
        return
    }
    accessTokenDuration, _ := strconv.Atoi(h.conf.AccessTokenDuration)
    refreshTokenDuration, _ := strconv.Atoi(h.conf.RefreshTokenDuration)
    c.SetCookie(string(domain.AccessToken), resp.AccessToken, accessTokenDuration*60, "/api/v1", "", false, true)
    c.SetCookie(string(domain.RefreshToken), resp.RefreshToken, refreshTokenDuration*60*60*24, "/api/v1/refresh", "", false, true)
    c.SetCookie(string(domain.RefreshToken), resp.RefreshToken, refreshTokenDuration*60*60*24, "/api/v1/logout", "", false, true)
    c.JSON(http.StatusOK, resp)
}
func (h *AuthHandler) Logout(c *gin.Context) {
    refreshToken, err := c.Cookie(string(domain.RefreshToken))
    if err != nil {
        c.JSON(http.StatusUnauthorized, util.ErrorResponse{Error: domain.ErrUnauthorized.Error()})
        return
    }
    if err := h.svc.Logout(c.Request.Context(), refreshToken); err != nil {
        if errors.Is(err, domain.ErrInvalidToken) {
            c.JSON(http.StatusUnauthorized, util.ErrorResponse{Error: err.Error()})
            return
        }
        c.JSON(http.StatusInternalServerError, util.ErrorResponse{Error: err.Error()})
        return
    }
    c.Status(http.StatusNoContent)
}
All three handlers read the refresh token from the httpOnly cookie rather than the request body. This is why the cookie is set on both /api/v1/refresh and /api/v1/logout paths during login — the browser will only send a cookie to paths it was scoped to, so the refresh token cookie needs to be explicitly scoped to both paths where it will be used.
The access token cookie is scoped to /api/v1 broadly, meaning it's sent on every authenticated request under that prefix. Combined with the httpOnly flag on both cookies, JavaScript running in the browser can never read either token directly — a meaningful defense against XSS-based token theft.
Job & Enrollment: REST API Redis Caching Strategy
Job — Filter-Aware Caching
The job service introduces a caching challenge that the user service didn’t have: filtered queries. GET /jobs?type=remote&location=Jakarta and GET /jobs?type=full_time are two different queries that return two different result sets — they can't share the same cache key.
The solution is to encode the filter parameters directly into the cache key:
const (
    jobPrefix      = "job"
    allJobsPrefix  = "job:all"
    userJobsPrefix = "job:user"
    jobCacheTTL    = 5 * time.Minute
)

func (s *JobService) GetAllJobs(ctx context.Context, filter domain.JobFilter) ([]domain.JobResponse, error) {
    key := util.GenerateCacheKey(allJobsPrefix, util.GenerateCacheKeyParams(
        string(filter.Type),
        string(filter.ExperienceLevel),
        filter.Location,
        filter.Search,
        filter.IsActive,
    ))
    // key example: "job:all:remote-senior-Jakarta-go-<nil>"
    if cached, err := s.cache.Get(ctx, key); err == nil {
        var responses []domain.JobResponse
        if err := util.Deserialize([]byte(cached), &responses); err == nil {
            return responses, nil
        }
    }
    jobs, err := s.repo.GetAllJobs(ctx, filter)
    // ... build responses, populate cache
}
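GenerateCacheKeyParams is another util helper not reproduced here. Matching the "remote-senior-Jakarta-go-&lt;nil&gt;" example above, a sketch (name and variadic signature are assumptions based on the call site) could render each value with %v and join with "-":

```go
package main

import (
    "fmt"
    "strings"
)

// generateCacheKeyParams renders each filter value with fmt's %v verb and
// joins them with "-", so an unset pointer filter shows up as "<nil>".
// Sketch only; the repo's util.GenerateCacheKeyParams may differ.
func generateCacheKeyParams(params ...any) string {
    parts := make([]string, len(params))
    for i, p := range params {
        parts[i] = fmt.Sprintf("%v", p)
    }
    return strings.Join(parts, "-")
}

func main() {
    var isActive *bool // filter not set by the client
    fmt.Println(generateCacheKeyParams("remote", "senior", "Jakarta", "go", isActive))
    // remote-senior-Jakarta-go-<nil>
}
```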
Each unique filter combination gets its own cache entry. The tradeoff is that we can’t target individual keys for invalidation on writes — we don’t know which filter combinations are currently cached. This is where DeleteByPrefix becomes essential:
func (s *JobService) CreateJob(ctx context.Context, userID uint, req *domain.CreateJobRequest) (*domain.JobResponse, error) {
    // ... build the domain.Job from req (elided)
    created, err := s.repo.CreateJob(ctx, job)
    if err != nil {
        return nil, err
    }
    // wipe ALL filter variants and the posting user's job list.
    _ = s.cache.DeleteByPrefix(ctx, allJobsPrefix)
    _ = s.cache.Delete(ctx, util.GenerateCacheKey(userJobsPrefix, userID))
    return toJobResponse(created), nil
}
DeleteByPrefix scans Redis for all keys starting with job:all and deletes them in one sweep. Updates and deletes follow the same pattern, but also invalidate the individual job entry:
func (s *JobService) UpdateJob(ctx context.Context, id uint, userID uint, role domain.Role, req *domain.UpdateJobRequest) (*domain.JobResponse, error) {
    // ... ownership check and field updates
    _ = s.cache.Delete(ctx, util.GenerateCacheKey(jobPrefix, id))              // "job:42"
    _ = s.cache.DeleteByPrefix(ctx, allJobsPrefix)                             // "job:all:*"
    _ = s.cache.Delete(ctx, util.GenerateCacheKey(userJobsPrefix, job.UserID)) // "job:user:5"
    return toJobResponse(updated), nil
}
Three cache regions are always invalidated together on any write: the individual job entry, all filter-variant list caches, and the job owner’s personal job list.
Enrollment — Multi-Dimensional Invalidation
Enrollments are the most cache-complex resource because a single enrollment record is referenced by three different cache dimensions simultaneously — by enrollment ID, by the applicant’s user ID, and by the job ID it belongs to.
const (
    enrollmentPrefix     = "enrollment"      // enrollment:{id}
    myEnrollmentsPrefix  = "enrollment:user" // enrollment:user:{userID}
    jobEnrollmentsPrefix = "enrollment:job"  // enrollment:job:{jobID}
    enrollmentCacheTTL   = 5 * time.Minute
)
Every write operation must invalidate all three dimensions at once. The key insight is that we fetch the enrollment record before the delete so we have access to both UserID and JobID for cache key construction:
func (s *EnrollmentService) DeleteEnrollmentByID(ctx context.Context, id uint, requesterID uint, role domain.Role) error {
    // fetch first to get UserID and JobID for cache invalidation.
    enrollment, err := s.repo.GetEnrollmentByID(ctx, id)
    if err != nil {
        return domain.ErrNotFound
    }
    if role != domain.RoleAdmin && enrollment.UserID != requesterID {
        return domain.ErrForbidden
    }
    if err := s.repo.DeleteEnrollmentByID(ctx, id); err != nil {
        return err
    }
    // all three dimensions invalidated in one Delete call.
    _ = s.cache.Delete(ctx,
        util.GenerateCacheKey(enrollmentPrefix, id),                   // enrollment:42
        util.GenerateCacheKey(myEnrollmentsPrefix, enrollment.UserID), // enrollment:user:7
        util.GenerateCacheKey(jobEnrollmentsPrefix, enrollment.JobID), // enrollment:job:3
    )
    return nil
}
The same three-key invalidation pattern applies to UpdateEnrollmentStatus — when a job owner accepts or rejects an application, the individual record, the applicant's list, and the job's applicant list all become stale simultaneously:
func (s *EnrollmentService) UpdateEnrollmentStatus(...) (*domain.EnrollmentResponse, error) {
    // ... ownership check and status update
    _ = s.cache.Delete(ctx,
        util.GenerateCacheKey(enrollmentPrefix, id),
        util.GenerateCacheKey(myEnrollmentsPrefix, enrollment.UserID),
        util.GenerateCacheKey(jobEnrollmentsPrefix, enrollment.JobID),
    )
    return toEnrollmentResponse(updated), nil
}
Here’s the complete cache key map across all three services for reference:
Users
user:{id} → single user (5 min TTL)
user:all → all users list (5 min TTL)
Jobs
job:{id} → single job (5 min TTL)
job:all:{params} → filtered job list, one key per filter combo (5 min TTL)
job:user:{userID} → jobs posted by a specific user (5 min TTL)
Enrollments
enrollment:{id} → single enrollment (5 min TTL)
enrollment:user:{id} → all enrollments by a user (5 min TTL)
enrollment:job:{id} → all enrollments for a job (5 min TTL)
Auth
refresh_token:{token} → active session (TTL matches token expiry)
One design decision worth noting: auth refresh tokens use a TTL that matches the actual token expiry, so Redis cleans them up automatically. All other cache entries use a fixed 5-minute TTL — short enough that stale data is a minor concern, long enough to absorb bursts of repeated reads on popular resources.
Wrapping Up
We’ve covered a lot of ground in this article. Starting from a clean hexagonal architecture, we built a full job listings API with user authentication, job posting, and enrollment features — then layered in a Redis caching strategy that touches every part of the application.
Here’s a quick recap of everything we implemented:
Cache-aside reads on every GET operation — check Redis first, fall back to PostgreSQL on a miss, then populate the cache for next time. If Redis is unavailable, the service degrades gracefully to database reads without returning errors to the client.
Targeted cache invalidation on every write — individual keys, list keys, and cross-dimensional keys are all invalidated together in a single Delete call the moment the underlying data changes, so no stale entry survives a write.
Filter-aware cache keys for job listings — each unique combination of query parameters generates its own cache key, so filtered results are cached independently. DeleteByPrefix sweeps all variants clean on any write.
Redis as a token store for auth — refresh tokens are stored in Redis with a TTL matching their expiry, enabling true server-side session revocation on logout and refresh token rotation on every refresh request.
Hexagonal architecture keeping Redis invisible to the core — every service depends only on the CacheRepository interface. The handlers, domain models, and port definitions have zero knowledge of Redis. Swap it out for Memcached, an in-memory map, or remove it entirely — nothing outside the service layer changes.
The result is a backend that handles repeated reads without hammering the database, supports real session management without a dedicated session server, and stays maintainable because the caching concern is cleanly contained within the service layer.
The complete source code for everything covered in this article is available on GitHub: github.com/yehezkiel1086/go-redis-job-listings
Feel free to clone it, open issues, or submit pull requests if you spot something to improve.