Building Microservices-based Email Notification System in Go with RabbitMQ
An implementation of a microservice architecture in Go, demonstrating user registration, JWT authentication, and asynchronous email confirmation, where a RESTful API service communicates with a background worker service via a message queue (RabbitMQ).
Modern applications rarely live in isolation — they send emails, trigger notifications, and react to user actions in real time. The traditional approach to handling these side effects is to do everything in a single HTTP request: the client registers, the server hashes the password, writes to the database, and sends a confirmation email — all before returning a response. This works, until it doesn’t. A slow SMTP server, a network blip, or a spike in registrations can turn a simple sign-up into a timeout.
In this post, we will build a user registration system in Go that decouples email delivery from the HTTP lifecycle using RabbitMQ. When a user registers, the user-service publishes a confirmation event to a message queue and immediately returns a response. A separate notification-service consumes that event and handles the email send independently, so a slow inbox never slows down your API. The full implementation is on GitHub: github.com/yehezkiel1086/go-rabbitmq-email-notification
We will build two services: a user-service that handles registration, email confirmation, and JWT-based authentication using Gin and PostgreSQL, and a notif-service that consumes messages from RabbitMQ and delivers confirmation emails via Gmail SMTP. The two services communicate exclusively through a durable RabbitMQ queue, making the architecture resilient to restarts and partial failures.
A quick orientation in case some of the tools are new to you: RabbitMQ is a message broker that lets services communicate asynchronously by passing messages through queues, Gin is a lightweight HTTP framework for Go, and GORM is a Go ORM that simplifies database interactions with PostgreSQL.
Prerequisites
- Familiarity with Go, REST APIs, and basic SQL
- Docker and Docker Compose (for PostgreSQL and RabbitMQ)
- A Gmail account with an App Password configured for SMTP
Getting Started
Before writing any service code, let’s set up the project structure, shared configuration, and infrastructure. Since we are building two independent microservices, the repository is organized as a monorepo — both services live in the same repo but are completely separate Go modules with their own dependencies, entry points, and internal packages.
The final project structure looks like this:
go-rabbitmq-email-notification/
├── user-service/
│   ├── cmd/http/main.go
│   └── internal/
│       ├── adapter/
│       │   ├── config/
│       │   ├── handler/
│       │   └── storage/
│       │       ├── postgres/
│       │       │   └── repository/
│       │       └── rabbitmq/
│       └── core/
│           ├── domain/
│           ├── port/
│           ├── service/
│           └── util/
├── notif-service/
│   ├── cmd/amqp/main.go
│   └── internal/
│       ├── adapter/
│       │   ├── config/
│       │   ├── handler/
│       │   └── storage/
│       │       └── rabbitmq/
│       └── core/
│           ├── port/
│           ├── service/
│           └── util/
├── .env
├── docker-compose.yml
└── Taskfile.yml
The user-service exposes an HTTP API (via Gin) for registration, login, and email confirmation. It writes users to PostgreSQL and publishes confirmation events to RabbitMQ. The notif-service has no HTTP API at all — it runs as a long-lived AMQP consumer, receives messages from the same queue, and sends confirmation emails via Gmail SMTP.
Initializing the Go modules
Since these are two separate services, each gets its own go.mod. Run these from the repo root:
mkdir -p user-service notif-service
cd user-service && go mod init github.com/<your-username>/go-rabbitmq-email-notification/user-service
cd ../notif-service && go mod init github.com/<your-username>/go-rabbitmq-email-notification/notif-service
Then install dependencies for each service:
# user-service
cd user-service
go get github.com/gin-gonic/gin
go get gorm.io/gorm
go get gorm.io/driver/postgres
go get github.com/rabbitmq/amqp091-go
go get github.com/golang-jwt/jwt/v5
go get github.com/joho/godotenv
go get golang.org/x/crypto
# notif-service
cd ../notif-service
go get github.com/rabbitmq/amqp091-go
go get gopkg.in/gomail.v2
go get github.com/joho/godotenv
Environment variables
Both services share a single .env file at the repo root. Create it with the following:
APP_NAME=go-rabbitmq-email-notification
APP_ENV=development
# user-service HTTP server
USER_HTTP_HOST=127.0.0.1
USER_HTTP_PORT=8080
# notif-service AMQP listener
NOTIF_AMQP_HOST=127.0.0.1
NOTIF_AMQP_PORT=8081
# RabbitMQ broker
RABBITMQ_HOST=127.0.0.1
RABBITMQ_PORT=5672
RABBITMQ_USER=rabbitmq
RABBITMQ_PASSWORD=admin
RABBITMQ_NAME=email_confirm
# Gmail SMTP
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_SENDER_EMAIL=your-email@gmail.com
SMTP_APP_PASSWORD=your-app-password # Google account → Security → App Passwords
# PostgreSQL
DB_HOST=127.0.0.1
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=admin
DB_NAME=email_notif
# JWT
JWT_SECRET=your-secret-here
SESSION_DURATION=15 # in minutes
For SMTP_APP_PASSWORD, you need to generate an App Password from your Google account under Security → 2-Step Verification → App Passwords. Your main Gmail password will not work here.
Infrastructure with Docker Compose
We use Docker Compose to run PostgreSQL and RabbitMQ locally. Both are lightweight Alpine images with named volumes so data persists between restarts.
services:
  rabbitmq:
    image: rabbitmq:3.13-management-alpine
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASSWORD}
    ports:
      - "5672:5672"
      - "15672:15672" # management UI at http://localhost:15672
    container_name: rabbitmq

  postgres:
    image: postgres:17-alpine
    restart: always
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    ports:
      - "5432:5432"
    container_name: postgres

volumes:
  postgres-data:
    driver: local
  rabbitmq_data:
    driver: local
Start the infrastructure with:
task compose:up
Once RabbitMQ is running, the management UI is available at http://localhost:15672 with the credentials from your .env. This is useful for inspecting queues and monitoring message flow during development.
Task runner
We use Taskfile as a make-like task runner. Install it with brew install go-task (macOS) or see the official install guide. The relevant tasks are:
version: '3'

dotenv:
  - .env

tasks:
  compose:up:
    desc: "run docker containers"
    cmd: docker compose up -d
  compose:down:
    desc: "stop running docker containers"
    cmd: docker compose down
  db:cli:
    desc: "access postgres db cli"
    cmd: docker exec -it postgres psql -U {{ .DB_USER }} -d {{ .DB_NAME }}
  dev:user:
    desc: "run user service in development mode"
    cmd: go run user-service/cmd/http/main.go
  dev:notif:
    desc: "run notification service in development mode"
    cmd: go run notif-service/cmd/amqp/main.go
Run each service in a separate terminal:
task dev:user # starts the HTTP server on :8080
task dev:notif # starts the AMQP consumer
Config loading
Each service loads its config from the shared .env using godotenv. In production, the env vars are expected to be injected directly (e.g. by Kubernetes or a secrets manager), so godotenv.Load() is skipped when APP_ENV=production.
The user-service config bundles HTTP, PostgreSQL, RabbitMQ, and JWT settings:
func New() (*Container, error) {
if os.Getenv("APP_ENV") != "production" {
if err := godotenv.Load(); err != nil {
return nil, fmt.Errorf("unable to load .env: %w", err)
}
}
return &Container{
App: &App{Name: os.Getenv("APP_NAME"), Env: os.Getenv("APP_ENV")},
HTTP: &HTTP{Host: os.Getenv("USER_HTTP_HOST"), Port: os.Getenv("USER_HTTP_PORT")},
RabbitMQ: &RabbitMQ{
Host: os.Getenv("RABBITMQ_HOST"), Port: os.Getenv("RABBITMQ_PORT"),
User: os.Getenv("RABBITMQ_USER"), Password: os.Getenv("RABBITMQ_PASSWORD"),
},
DB: &DB{
Host: os.Getenv("DB_HOST"), Port: os.Getenv("DB_PORT"),
User: os.Getenv("DB_USER"), Password: os.Getenv("DB_PASSWORD"),
Name: os.Getenv("DB_NAME"),
},
JWT: &JWT{Secret: os.Getenv("JWT_SECRET"), Duration: os.Getenv("SESSION_DURATION")},
}, nil
}
The notif-service config is simpler — no database, no JWT, just RabbitMQ and SMTP:
func New() (*Container, error) {
if os.Getenv("APP_ENV") != "production" {
if err := godotenv.Load(); err != nil {
return nil, err
}
}
return &Container{
App: &App{Name: os.Getenv("APP_NAME"), Env: os.Getenv("APP_ENV")},
RabbitMQ: &RabbitMQ{
Host: os.Getenv("RABBITMQ_HOST"), Port: os.Getenv("RABBITMQ_PORT"),
User: os.Getenv("RABBITMQ_USER"), Password: os.Getenv("RABBITMQ_PASSWORD"),
},
SMTP: &SMTP{
Host: os.Getenv("SMTP_HOST"), Port: os.Getenv("SMTP_PORT"),
SenderEmail: os.Getenv("SMTP_SENDER_EMAIL"), Password: os.Getenv("SMTP_APP_PASSWORD"),
},
}, nil
}
With the infrastructure running and configs in place, we're almost ready to build the services. For reference, here's the complete user-service/internal/adapter/config/config.go:
package config
import (
"fmt"
"os"
"github.com/joho/godotenv"
)
type (
Container struct {
App *App
HTTP *HTTP
RabbitMQ *RabbitMQ
DB *DB
JWT *JWT
}
App struct {
Name string
Env string
}
HTTP struct {
Host string
Port string
}
RabbitMQ struct {
Host string
Port string
User string
Password string
}
DB struct {
Host string
Port string
User string
Password string
Name string
}
JWT struct {
Secret string
Duration string
}
)
func New() (*Container, error) {
if os.Getenv("APP_ENV") != "production" {
if err := godotenv.Load(); err != nil {
return nil, fmt.Errorf("unable to load .env: %w", err)
}
}
App := &App{
Name: os.Getenv("APP_NAME"),
Env: os.Getenv("APP_ENV"),
}
HTTP := &HTTP{
Host: os.Getenv("USER_HTTP_HOST"),
Port: os.Getenv("USER_HTTP_PORT"),
}
Rabbitmq := &RabbitMQ{
Host: os.Getenv("RABBITMQ_HOST"),
Port: os.Getenv("RABBITMQ_PORT"),
User: os.Getenv("RABBITMQ_USER"),
Password: os.Getenv("RABBITMQ_PASSWORD"),
}
DB := &DB{
Host: os.Getenv("DB_HOST"),
Port: os.Getenv("DB_PORT"),
User: os.Getenv("DB_USER"),
Password: os.Getenv("DB_PASSWORD"),
Name: os.Getenv("DB_NAME"),
}
JWT := &JWT{
Secret: os.Getenv("JWT_SECRET"),
Duration: os.Getenv("SESSION_DURATION"),
}
return &Container{
App: App,
HTTP: HTTP,
RabbitMQ: Rabbitmq,
DB: DB,
JWT: JWT,
}, nil
}
And here’s the complete notif-service/internal/adapter/config/config.go:
package config
import (
"os"
"github.com/joho/godotenv"
)
type (
Container struct {
App *App
AMQP *AMQP
RabbitMQ *RabbitMQ
SMTP *SMTP
}
App struct {
Name string
Env string
}
AMQP struct {
Host string
Port string
}
RabbitMQ struct {
Host string
Port string
User string
Password string
}
SMTP struct {
Host string
Port string
SenderEmail string
Password string
}
)
func New() (*Container, error) {
if os.Getenv("APP_ENV") != "production" {
err := godotenv.Load()
if err != nil {
return nil, err
}
}
App := &App{
Name: os.Getenv("APP_NAME"),
Env: os.Getenv("APP_ENV"),
}
AMQP := &AMQP{
Host: os.Getenv("NOTIF_AMQP_HOST"),
Port: os.Getenv("NOTIF_AMQP_PORT"),
}
Rabbitmq := &RabbitMQ{
Host: os.Getenv("RABBITMQ_HOST"),
Port: os.Getenv("RABBITMQ_PORT"),
User: os.Getenv("RABBITMQ_USER"),
Password: os.Getenv("RABBITMQ_PASSWORD"),
}
SMTP := &SMTP{
Host: os.Getenv("SMTP_HOST"),
Port: os.Getenv("SMTP_PORT"),
SenderEmail: os.Getenv("SMTP_SENDER_EMAIL"),
Password: os.Getenv("SMTP_APP_PASSWORD"),
}
return &Container{
App: App,
AMQP: AMQP,
RabbitMQ: Rabbitmq,
SMTP: SMTP,
}, nil
}
User Service
The user-service is an HTTP API responsible for three things: registering users, authenticating them with JWT, and confirming their email via a token. It follows a layered architecture — handler → service → repository — with each layer depending on interfaces defined in the port package rather than concrete types. This keeps the business logic in service/ free from HTTP or database concerns.
Let’s start from the bottom up.
Domain
The domain package defines the core data structures shared across all layers. The central entity is User:
type User struct {
gorm.Model
Email string `json:"email" gorm:"size:255;not null;unique"`
Password string `json:"password" gorm:"size:255;not null"`
Name string `json:"name" gorm:"size:255;not null"`
Role Role `json:"role" gorm:"default:2001;not null"`
ConfirmationToken string `json:"confirmation_token" gorm:"size:255"`
IsVerified bool `json:"is_verified" gorm:"default:false"`
}
Two fields are worth noting: ConfirmationToken stores a randomly generated token that is emailed to the user after registration, and IsVerified is flipped to true once they click the confirmation link. The Role type is a uint16 with two named constants — AdminRole (5150) and UserRole (2001) — used by the JWT middleware for authorization.
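The Role type itself is not shown in the post's snippets, so here is a minimal self-contained sketch consistent with the description (the values 5150 and 2001 come from the text; in the real project this would live in the domain package):

```go
package main

import "fmt"

// Role identifies a user's authorization level. The numeric values match
// the default referenced by the User model's gorm:"default:2001" tag.
type Role uint16

const (
	AdminRole Role = 5150 // required by RoleMiddleware for admin-only routes
	UserRole  Role = 2001 // default role assigned at registration
)

func main() {
	fmt.Println(UserRole, AdminRole) // 2001 5150
}
```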
For API boundaries we define separate request and response types. UserResponse deliberately does not embed gorm.Model — it is a plain DTO with only the fields the client should ever see:
type UserResponse struct {
ID uint `json:"id"`
Email string `json:"email"`
Name string `json:"name"`
Role Role `json:"role"`
}
Embedding gorm.Model in a response type would leak DeletedAt and other internal fields into the JSON, and would also confuse GORM when the type is used in queries.
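To see why embedding leaks fields, here is a stdlib demonstration of struct-field promotion into JSON. The local model type is a stand-in for gorm.Model (the real gorm.DeletedAt has custom marshaling, but the mechanism is the same):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// model mimics the shape of gorm.Model for illustration only.
type model struct {
	ID        uint
	CreatedAt time.Time
	UpdatedAt time.Time
	DeletedAt *time.Time
}

// leakyResponse embeds model, so all of its fields are promoted into the JSON.
type leakyResponse struct {
	model
	Email string `json:"email"`
}

// cleanResponse is a plain DTO: only the fields the client should see.
type cleanResponse struct {
	ID    uint   `json:"id"`
	Email string `json:"email"`
}

func marshal(v any) string {
	b, _ := json.Marshal(v)
	return string(b)
}

func main() {
	fmt.Println(marshal(leakyResponse{Email: "a@b.c"})) // internal fields leak
	fmt.Println(marshal(cleanResponse{ID: 1, Email: "a@b.c"}))
}
```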
We also define sentinel errors here so all layers can reference them by value rather than by string:
var (
ErrNotFound = errors.New("resource not found")
ErrUnauthorized = errors.New("unauthorized")
ErrInternal = errors.New("internal server error")
)
The JWT claims type lives in domain/jwt.go and embeds jwt.RegisteredClaims alongside our custom fields:
type JWTClaims struct {
Email string `json:"email"`
Role Role `json:"role"`
jwt.RegisteredClaims
}
Port (Interfaces)
The port package defines the contracts between layers. Both the repository and service are defined as interfaces, which means the handler only depends on port.UserService and port.AuthService — never on a concrete struct. This makes the code easy to test and swap out:
type UserRepository interface {
CreateUser(ctx context.Context, user *domain.User) (*domain.UserResponse, error)
GetUserByEmail(ctx context.Context, email string) (*domain.User, error)
GetUserByToken(ctx context.Context, token string) (*domain.User, error)
GetUsers(ctx context.Context) ([]domain.UserResponse, error)
UpdateUser(ctx context.Context, user *domain.User) (*domain.User, error)
}
type UserService interface {
RegisterUser(ctx context.Context, user *domain.User) (*domain.UserResponse, error)
ConfirmEmail(ctx context.Context, token string) (*domain.User, error)
GetUsers(ctx context.Context) ([]domain.UserResponse, error)
}
type AuthService interface {
Login(ctx context.Context, email, password string) (string, error)
}
Repository
The repository layer is the only place that talks to PostgreSQL. It receives a *postgres.DB wrapper and calls GORM methods on it. Every method passes the context down via WithContext(ctx) — this ensures database calls respect cancellation and timeouts from the request context.
One subtlety worth calling out is in GetUsers. It uses Scan rather than Find:
func (ur *UserRepository) GetUsers(ctx context.Context) ([]domain.UserResponse, error) {
db := ur.db.GetDB()
var users []domain.UserResponse
if err := db.WithContext(ctx).Model(&domain.User{}).Scan(&users).Error; err != nil {
return nil, err
}
return users, nil
}
Find is designed for GORM model types — passing it a plain DTO struct produces incorrect or empty results. Scan projects query columns onto any struct, making it the right tool when the target type is not a GORM model.
Another subtle fix is in GetUserByEmail and GetUserByToken: the local variable is declared as a value type (var user domain.User) rather than a pointer (var user *domain.User). Passing a nil pointer to GORM's First is undefined behavior — always declare the variable as a value and pass its address:
func (ur *UserRepository) GetUserByEmail(ctx context.Context, email string) (*domain.User, error) {
db := ur.db.GetDB()
var user domain.User
if err := db.WithContext(ctx).Where("email = ?", email).First(&user).Error; err != nil {
return nil, err
}
return &user, nil
}
RabbitMQ Producer
Before getting to the service layer, let’s look at the RabbitMQ adapter since UserService depends on it directly.
The Rabbitmq struct wraps an amqp.Connection and an amqp.Channel. Initialization includes a retry loop — this is not optional. In any Docker Compose setup, the Go process starts in milliseconds while RabbitMQ takes several seconds to be ready. Without retries, the service will simply fail to start every time:
const (
maxRetries = 5
retryInterval = 3 * time.Second
)
func New(conf *config.RabbitMQ) (*Rabbitmq, error) {
uri := fmt.Sprintf("amqp://%s:%s@%s:%s/", conf.User, conf.Password, conf.Host, conf.Port)
var (
conn *amqp.Connection
err error
)
for i := range maxRetries {
conn, err = amqp.Dial(uri)
if err == nil {
break
}
log.Printf("rabbitmq: connection attempt %d/%d failed: %v", i+1, maxRetries, err)
time.Sleep(retryInterval)
}
if err != nil {
return nil, fmt.Errorf("rabbitmq: failed to connect after %d attempts: %w", maxRetries, err)
}
ch, err := conn.Channel()
if err != nil {
conn.Close()
return nil, fmt.Errorf("rabbitmq: failed to open channel: %w", err)
}
return &Rabbitmq{conn, ch}, nil
}
Queue declaration uses durable: true. This is critical — a non-durable queue is wiped on every broker restart, silently dropping any unprocessed confirmation emails:
func (mq *Rabbitmq) DeclareQueue(name string) (*amqp.Queue, error) {
q, err := mq.ch.QueueDeclare(
name,
true, // durable: survives broker restarts
false,
false,
false,
nil,
)
if err != nil {
return nil, fmt.Errorf("rabbitmq: failed to declare queue %q: %w", name, err)
}
return &q, nil
}
Publishing sets DeliveryMode: amqp.Persistent alongside the durable queue declaration. Queue durability makes the queue itself survive a broker restart; message persistence writes each message to disk so the messages survive too. You need both:
func (mq *Rabbitmq) SendJSON(q *amqp.Queue, data []byte) error {
ctx, cancel := context.WithTimeout(context.Background(), publishTimeout)
defer cancel()
err := mq.ch.PublishWithContext(ctx,
"",
q.Name,
false,
false,
amqp.Publishing{
ContentType: "application/json",
DeliveryMode: amqp.Persistent,
Body: data,
},
)
if err != nil {
return fmt.Errorf("rabbitmq: failed to publish to queue %q: %w", q.Name, err)
}
return nil
}
Finally, Close shuts down the channel first, then the connection. Closing the connection first forcefully tears down the channel underneath it, which can lose in-flight messages:
func (mq *Rabbitmq) Close() {
if err := mq.ch.Close(); err != nil {
log.Printf("rabbitmq: error closing channel: %v", err)
}
if err := mq.conn.Close(); err != nil {
log.Printf("rabbitmq: error closing connection: %v", err)
}
}
Important: If you already ran the service with durable: false and the email_confirm queue exists in your broker, RabbitMQ will reject the new declaration with a 406 PRECONDITION_FAILED error. Delete the existing queue first via the management UI at http://localhost:15672 or with docker exec -it rabbitmq rabbitmqadmin delete queue name=email_confirm, then restart the service.
User Service
The service layer is where the registration flow lives. NewUserService declares the queue on startup — if this fails, the service refuses to start entirely, which is the right behavior since sending confirmation emails is not optional:
func NewUserService(repo port.UserRepository, mq *rabbitmq.Rabbitmq) (*UserService, error) {
q, err := mq.DeclareQueue("email_confirm")
if err != nil {
return nil, err
}
return &UserService{repo, mq, q}, nil
}
RegisterUser coordinates the full registration flow — hash password, generate token, persist user, publish event:
func (us *UserService) RegisterUser(ctx context.Context, user *domain.User) (*domain.UserResponse, error) {
hashedPwd, err := util.HashPassword(user.Password)
if err != nil {
return nil, err
}
user.Password = hashedPwd
user.ConfirmationToken, err = util.GenerateToken()
if err != nil {
return nil, err
}
createdUser, err := us.repo.CreateUser(ctx, user)
if err != nil {
return nil, err
}
confData := map[string]string{
"email": createdUser.Email,
"confirmation_token": user.ConfirmationToken,
}
confJson, err := util.Serialize(confData)
if err != nil {
return nil, err
}
if err := us.mq.SendJSON(us.q, confJson); err != nil {
return nil, err
}
return createdUser, nil
}
Notice that user.ConfirmationToken is used for the published message rather than anything from createdUser. UserResponse is a DTO that doesn't carry the token — the token stays on the original user struct that was passed into CreateUser. This is an easy mistake to make if you're not paying attention to which struct is which.
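The util helpers used above (GenerateToken, Serialize, Deserialize) are not shown in the post, so here is a plausible self-contained sketch of how they could look, using crypto/rand for the token and JSON for serialization. The 32-byte token length is an assumption:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// GenerateToken returns a 64-character hex string built from 32
// cryptographically random bytes.
func GenerateToken() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

// Serialize and Deserialize are thin JSON wrappers, matching how
// RegisterUser and SendConfirmationEmail use them.
func Serialize(v any) ([]byte, error)   { return json.Marshal(v) }
func Deserialize(b []byte, v any) error { return json.Unmarshal(b, v) }

func main() {
	token, _ := GenerateToken()
	payload, _ := Serialize(map[string]string{"email": "a@b.c", "confirmation_token": token})
	fmt.Println(len(token), len(payload) > 0) // 64 true
}
```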
ConfirmEmail looks up the user by token, flips IsVerified, and clears the token so it can't be reused:
func (us *UserService) ConfirmEmail(ctx context.Context, token string) (*domain.User, error) {
user, err := us.repo.GetUserByToken(ctx, token)
if err != nil {
return nil, err
}
user.IsVerified = true
user.ConfirmationToken = ""
if _, err := us.repo.UpdateUser(ctx, user); err != nil {
return nil, err
}
return user, nil
}
Auth Service
AuthService handles login and JWT issuance. The login flow checks email, verified status, and password — but all three failure cases return the same domain.ErrUnauthorized. This is intentional: returning different errors for "user not found" vs "wrong password" would allow an attacker to enumerate valid email addresses:
func (as *AuthService) Login(ctx context.Context, email, password string) (string, error) {
user, err := as.userRepo.GetUserByEmail(ctx, email)
if err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return "", domain.ErrUnauthorized
}
return "", fmt.Errorf("auth: failed to look up user: %w", err)
}
if !user.IsVerified {
return "", domain.ErrUnauthorized
}
if err := util.CompareHashedPwd(user.Password, password); err != nil {
return "", domain.ErrUnauthorized
}
token, err := util.GenerateJWTToken(as.conf, user)
if err != nil {
return "", fmt.Errorf("auth: failed to generate token: %w", err)
}
return token, nil
}
JWT generation uses HS256 and embeds Email and Role into the claims. Both IssuedAt and ExpiresAt are set from the same time.Now() call so they share a consistent base. Duration comes from config as an integer number of minutes:
func GenerateJWTToken(conf *config.JWT, user *domain.User) (string, error) {
mySigningKey := []byte(conf.Secret)
duration, err := strconv.Atoi(conf.Duration)
if err != nil {
return "", fmt.Errorf("jwt: invalid duration config value %q: %w", conf.Duration, err)
}
now := time.Now()
claims := domain.JWTClaims{
Email: user.Email,
Role: user.Role,
RegisteredClaims: jwt.RegisteredClaims{
IssuedAt: jwt.NewNumericDate(now),
ExpiresAt: jwt.NewNumericDate(now.Add(time.Duration(duration) * time.Minute)),
},
}
token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
return token.SignedString(mySigningKey)
}
Handlers
The handlers are thin — they bind the request, call the service, and map errors to HTTP status codes. The key pattern throughout is mapping sentinel errors with errors.Is to get the right status code rather than returning 500 for everything:
func (ah *AuthHandler) Login(c *gin.Context) {
var req LoginUserReq
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "email and password are required"})
return
}
token, err := ah.svc.Login(c.Request.Context(), req.Email, req.Password)
if err != nil {
if errors.Is(err, domain.ErrUnauthorized) {
c.JSON(http.StatusUnauthorized, gin.H{"error": domain.ErrUnauthorized.Error()})
return
}
c.JSON(http.StatusInternalServerError, gin.H{"error": domain.ErrInternal.Error()})
return
}
duration, err := strconv.Atoi(ah.conf.Duration)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": domain.ErrInternal.Error()})
return
}
c.SetCookie("jwt_token", token, duration*60, "/", "", false, true)
c.JSON(http.StatusOK, gin.H{"token": token})
}
One thing worth pointing out: all handler methods use c.Request.Context() rather than c directly. While *gin.Context does implement the context.Context interface, passing it as a context is semantically wrong — c.Request.Context() carries the actual request lifecycle and cancellation signal.
Similarly, error messages passed to gin.H are always plain strings — never errors.New(...). The error interface has no exported fields, so encoding/json marshals it to {}, which means the client receives {"error": {}} instead of {"error": "some message"}.
Middleware
AuthMiddleware accepts *config.JWT as a parameter rather than reading JWT_SECRET from os.Getenv directly. This keeps the middleware consistent with the rest of the codebase and prevents a misconfigured environment from silently producing an empty signing key — which would cause all tokens to either always fail or, worse, always succeed against "":
func AuthMiddleware(conf *config.JWT) gin.HandlerFunc {
return func(c *gin.Context) {
tokenString, err := c.Cookie("jwt_token")
if err != nil {
authHeader := c.GetHeader("Authorization")
tokenString, _ = strings.CutPrefix(authHeader, "Bearer ")
}
if tokenString == "" {
c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
c.Abort()
return
}
token, err := jwt.ParseWithClaims(tokenString, &domain.JWTClaims{}, func(token *jwt.Token) (any, error) {
if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
return nil, jwt.ErrSignatureInvalid
}
return []byte(conf.Secret), nil
})
// ...
}
}
The signing method check inside the key function — if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok — is a security requirement, not a nicety. Without it, an attacker can forge a valid token by sending a JWT with "alg": "none", bypassing signature verification entirely. Always validate the algorithm before returning the signing key.
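To make the attack concrete, here is a stdlib-only illustration of what the key-function check guards against: the token header is attacker-controlled, so its "alg" field must be validated before any key is returned. This is not a substitute for the library's verification, just a look at the header:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// headerAlg extracts the "alg" field from a JWT's (unverified) header segment.
func headerAlg(token string) (string, error) {
	parts := strings.Split(token, ".")
	if len(parts) < 2 {
		return "", fmt.Errorf("malformed token")
	}
	raw, err := base64.RawURLEncoding.DecodeString(parts[0])
	if err != nil {
		return "", err
	}
	var hdr struct {
		Alg string `json:"alg"`
	}
	if err := json.Unmarshal(raw, &hdr); err != nil {
		return "", err
	}
	return hdr.Alg, nil
}

// forgedToken builds an unsigned token claiming the given algorithm,
// with an empty payload ("e30" is base64url for "{}").
func forgedToken(alg string) string {
	hdr := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"` + alg + `","typ":"JWT"}`))
	return hdr + ".e30."
}

func main() {
	alg, _ := headerAlg(forgedToken("none"))
	fmt.Println(alg) // none: must be rejected before returning the signing key
}
```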
RoleMiddleware is variadic — it accepts one or more domain.Role values and allows the request through if the authenticated user holds any of them. This lets you protect admin-only routes with RoleMiddleware(domain.AdminRole) while future routes could allow both roles with RoleMiddleware(domain.UserRole, domain.AdminRole).
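The post does not show RoleMiddleware itself; its core is a variadic membership check like the hypothetical hasRole below, which the real middleware would wrap in a gin.HandlerFunc after reading the role from the validated claims:

```go
package main

import "fmt"

type Role uint16

const (
	AdminRole Role = 5150
	UserRole  Role = 2001
)

// hasRole reports whether the authenticated user's role matches any of the
// permitted roles. (Hypothetical helper, named for illustration.)
func hasRole(userRole Role, allowed ...Role) bool {
	for _, r := range allowed {
		if userRole == r {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasRole(UserRole, AdminRole))           // false: admin-only route
	fmt.Println(hasRole(UserRole, UserRole, AdminRole)) // true: both roles allowed
}
```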
Router
All routes are wired in NewRouter, which receives both *config.HTTP and *config.JWT so the JWT config can be threaded into AuthMiddleware:
func NewRouter(
conf *config.HTTP,
jwtConf *config.JWT,
userHandler *UserHandler,
authHandler *AuthHandler,
) *Router {
r := gin.New()
pb := r.Group("/api/v1")
ad := pb.Group("/", AuthMiddleware(jwtConf), RoleMiddleware(domain.AdminRole))
pb.POST("/login", authHandler.Login)
pb.POST("/register", userHandler.RegisterUser)
pb.GET("/confirm-email", userHandler.ConfirmEmail)
ad.GET("/users", userHandler.GetUsers)
return &Router{r}
}
Public routes (/login, /register, /confirm-email) are grouped under pb with no middleware. The admin group ad stacks AuthMiddleware and RoleMiddleware — a request to /users must carry a valid JWT and hold AdminRole, otherwise it is rejected before reaching the handler.
Wiring it together
main.go initializes everything in order and wires the dependencies manually — no DI framework, just constructor functions:
func main() {
conf, err := config.New()
handleError(err, "unable to load .env configs")
ctx := context.Background()
db, err := postgres.New(ctx, conf.DB)
handleError(err, "unable to connect to db")
err = db.Migrate(&domain.User{})
handleError(err, "unable to migrate db")
mq, err := rabbitmq.New(conf.RabbitMQ)
handleError(err, "rabbitmq connection failed")
defer mq.Close()
userRepo := repository.NewUserRepository(db)
userSvc, err := service.NewUserService(userRepo, mq)
handleError(err, "failed to init user service")
authSvc := service.NewAuthService(conf.JWT, userRepo)
r := handler.NewRouter(
conf.HTTP,
conf.JWT,
handler.NewUserHandler(userSvc),
handler.NewAuthHandler(conf.JWT, authSvc),
)
err = r.Serve(conf.HTTP)
handleError(err, "unable to start server")
}
The order matters: the database must be up before migrations run, and RabbitMQ must be connected before NewUserService attempts to declare the queue. The retry logic in rabbitmq.New handles the startup race with Docker Compose automatically.
With the user-service complete, we have a working HTTP API that registers users, issues JWTs, and publishes confirmation events to RabbitMQ. In the next section we’ll build the notif-service that consumes those events and delivers the emails.
Notification Service
The notif-service is intentionally minimal — it has no HTTP server, no database, and no authentication. Its entire job is to sit and listen on a RabbitMQ queue, and for every message it receives, send a confirmation email. It runs as a single long-lived process that blocks until the program is terminated.
The architecture mirrors the user-service in structure — handler → service → repository — but the “handler” here is not an HTTP handler. It is a message loop that drives the consume-process-acknowledge cycle.
RabbitMQ Consumer
The RabbitMQ adapter in the notif-service is the mirror image of the producer in the user-service. Where the user-service declares the queue and publishes to it, the notif-service declares the same queue and consumes from it. Both sides must declare the queue with identical parameters — if one side declares it as durable: true and the other as durable: false, RabbitMQ will reject the second declaration with a 406 PRECONDITION_FAILED error.
Initialization follows the same retry pattern as the user-service for the same reason — RabbitMQ starts slower than Go:
func New(conf *config.RabbitMQ) (*Rabbitmq, error) {
uri := fmt.Sprintf("amqp://%s:%s@%s:%s/", conf.User, conf.Password, conf.Host, conf.Port)
var (
conn *amqp.Connection
err error
)
for i := range maxRetries {
conn, err = amqp.Dial(uri)
if err == nil {
break
}
log.Printf("rabbitmq: connection attempt %d/%d failed: %v", i+1, maxRetries, err)
time.Sleep(retryInterval)
}
if err != nil {
return nil, fmt.Errorf("rabbitmq: failed to connect after %d attempts: %w", maxRetries, err)
}
ch, err := conn.Channel()
if err != nil {
conn.Close()
return nil, fmt.Errorf("rabbitmq: failed to open channel: %w", err)
}
return &Rabbitmq{conn, ch}, nil
}
The Consume method is where the notif-service diverges meaningfully from a simple listener. The most important parameter is auto-ack, and it must be false:
func (mq *Rabbitmq) Consume(q *amqp.Queue) (<-chan amqp.Delivery, error) {
msgs, err := mq.ch.Consume(
q.Name,
"notif-service", // consumer tag
false, // auto-ack: false — ack manually after successful send
false,
false,
false,
nil,
)
if err != nil {
return nil, fmt.Errorf("rabbitmq: failed to consume from queue %q: %w", q.Name, err)
}
return msgs, nil
}
With auto-ack: true, RabbitMQ removes the message from the queue the moment it is delivered to the consumer — before your code has done anything with it. If the notif-service crashes mid-send, or Gmail rejects the request, the message is gone with no way to recover it. Setting auto-ack: false means the message stays in an unacknowledged state in the broker until your code explicitly tells RabbitMQ what to do with it. You then have two outcomes:
- Success → call msg.Ack(false) to tell RabbitMQ the message was processed and can be removed
- Failure → call msg.Nack(false, true) to tell RabbitMQ the message failed and should be requeued for redelivery
The false argument in both calls means "this acknowledgement applies only to this single message", not to all previously unacknowledged messages on the channel.
Repository
The repository is a thin wrapper that holds a reference to the RabbitMQ adapter and the declared queue. It is constructed once at startup:
func NewNotifRepository(mq *rabbitmq.Rabbitmq) (*NotifRepository, error) {
q, err := mq.DeclareQueue("email_confirm")
if err != nil {
return nil, err
}
return &NotifRepository{
mq: mq,
q: q,
}, nil
}
func (n *NotifRepository) ReceiveNotif(ctx context.Context) (<-chan amqp.Delivery, error) {
return n.mq.Consume(n.q)
}
ReceiveNotif returns a Go channel (<-chan amqp.Delivery) that the service layer can range over. Each value received from the channel is a single RabbitMQ delivery containing the raw message body and the acknowledgement methods.
Port
The port interfaces define the contract between layers:
type NotifRepository interface {
ReceiveNotif(ctx context.Context) (<-chan amqp.Delivery, error)
}
type NotifService interface {
ReceiveNotif(ctx context.Context) (<-chan amqp.Delivery, error)
SendConfirmationEmail(ctx context.Context, msg []byte) error
}
SendConfirmationEmail takes the raw message bytes rather than a parsed struct — this keeps the parsing logic inside the service where it belongs, and means the handler only needs to pass msg.Body through without knowing anything about its shape.
Service
The service layer does two things: proxies ReceiveNotif down to the repository, and implements SendConfirmationEmail which handles parsing, template rendering, and SMTP delivery.
The email template is parsed once at package initialization using template.Must, not on every call to SendConfirmationEmail. Parsing a template on every message would be wasteful and would surface template syntax errors at send time rather than at startup:
var confirmationTmpl = template.Must(template.New("confirmation").Parse(`
<h2>Email Confirmation</h2>
<p>Please confirm your email by clicking the link below:</p>
<p><a href="{{.URL}}">Confirm Email</a></p>
<p>This link will expire in 15 minutes.</p>
`))
Using html/template rather than raw string concatenation matters here. If url were injected directly via +, a malformed or malicious token value could break the HTML structure. html/template escapes values automatically.
SendConfirmationEmail starts by deserializing the message bytes into a map[string]string, then validates that the required fields are present before doing anything else:
func (s *NotifService) SendConfirmationEmail(ctx context.Context, msg []byte) error {
var data map[string]string
if err := util.Deserialize(msg, &data); err != nil {
return fmt.Errorf("notif: failed to deserialize message: %w", err)
}
email, ok := data["email"]
if !ok || email == "" {
return fmt.Errorf("notif: missing email field in message")
}
token, ok := data["confirmation_token"]
if !ok || token == "" {
return fmt.Errorf("notif: missing confirmation_token field in message")
}
url := util.GenerateConfirmationURL(token)
var body bytes.Buffer
if err := confirmationTmpl.Execute(&body, map[string]string{"URL": url}); err != nil {
return fmt.Errorf("notif: failed to render email template: %w", err)
}
// ...
}
Field validation before sending matters because a malformed message that reaches DialAndSend without a recipient address will return a confusing SMTP error. Failing early with a clear message makes debugging much easier.
The confirmation URL is generated from the token using a utility function that constructs the full link pointing back to the user-service’s /confirm-email endpoint:
url := util.GenerateConfirmationURL(token)
// produces: http://127.0.0.1:8080/api/v1/confirm-email?token=<token>
When the user clicks this link, their browser sends a GET request to the user-service, which looks up the token in the database, sets IsVerified = true, and clears the token so it cannot be reused.
Email delivery uses gomail.v2 with Gmail's SMTP server over STARTTLS on port 587. The dialer is constructed from config values — host, port, sender address, and the App Password. d.SSL = false explicitly selects STARTTLS rather than direct TLS (port 465):
m := gomail.NewMessage()
m.SetHeader("From", s.conf.SenderEmail)
m.SetHeader("To", email)
m.SetHeader("Subject", "User registration confirmation")
m.SetBody("text/html", body.String())
d := gomail.NewDialer(s.conf.SMTPHost, s.conf.SMTPPort, s.conf.SenderEmail, s.conf.Password)
d.SSL = false // STARTTLS on port 587
if err := d.DialAndSend(m); err != nil {
return fmt.Errorf("notif: failed to send email to %s: %w", email, err)
}
Gmail setup: Gmail requires an App Password rather than your account password for SMTP access. Go to your Google Account → Security → 2-Step Verification → App Passwords, generate one for "Mail", and put it in SMTP_APP_PASSWORD in your .env. Make sure 2-Step Verification is enabled — App Passwords are not available without it.
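The relevant .env entries might look like this (variable names other than SMTP_APP_PASSWORD are assumptions based on the config fields used above):

```
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SENDER_EMAIL=you@gmail.com
SMTP_APP_PASSWORD=abcd efgh ijkl mnop
```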
Handler
The handler is the message loop. It calls ReceiveNotif to get the delivery channel, then blocks in a for/select loop processing messages one at a time:
func (h *NotifHandler) ReceiveNotif(ctx context.Context) {
msgs, err := h.svc.ReceiveNotif(ctx)
if err != nil {
log.Printf("notif: failed to start consumer: %v", err)
return
}
log.Printf("notif: waiting for messages. To exit press CTRL+C")
for {
select {
case <-ctx.Done():
log.Printf("notif: context cancelled, stopping consumer")
return
case msg, ok := <-msgs:
if !ok {
log.Printf("notif: message channel closed, stopping consumer")
return
}
if err := h.svc.SendConfirmationEmail(ctx, msg.Body); err != nil {
log.Printf("notif: failed to send email, requeuing message: %v", err)
if err := msg.Nack(false, true); err != nil {
log.Printf("notif: failed to nack message: %v", err)
}
continue
}
if err := msg.Ack(false); err != nil {
log.Printf("notif: failed to ack message: %v", err)
}
}
}
}
A few things are worth unpacking here.
The select on ctx.Done() means the consumer loop exits cleanly when the context is cancelled — for example, when the process receives SIGTERM. Without this, the goroutine would block on the channel forever and the process would hang on shutdown.
The ok check on msg, ok := <-msgs detects when the delivery channel has been closed, which happens if the RabbitMQ connection drops. The handler exits gracefully rather than spinning on a closed channel.
On email failure, msg.Nack(false, true) requeues the message. This means a transient Gmail error — a rate limit, a network timeout — will not permanently drop the confirmation email. The message goes back to the front of the queue and will be redelivered on the next consume cycle.
On success, msg.Ack(false) is called only after SendConfirmationEmail returns nil. This is the guarantee: the message is only removed from the broker once we are certain it was delivered. A previous implementation using auto-ack: true would have removed the message before the email was sent, making it impossible to retry on failure.
One important consideration: requeuing on failure can cause a retry loop if a message is permanently malformed — for example, if it contains an invalid email address that Gmail will always reject. In a production system you would add a dead-letter exchange to route repeatedly-failed messages somewhere safe for inspection rather than retrying them forever. For this implementation, the requeue behavior is sufficient.
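Routing failures to a dead-letter exchange is primarily a queue-declaration concern. A sketch of the extra arguments (the exchange name email_confirm.dlx is hypothetical, the DLX and its own queue would also need to be declared, and this is a fragment rather than a complete program):

```go
args := amqp.Table{
	// Messages that are nacked with requeue=false (or that expire) are
	// republished to this exchange instead of being dropped.
	"x-dead-letter-exchange": "email_confirm.dlx",
}
q, err := ch.QueueDeclare("email_confirm", true, false, false, false, args)
```

Note that dead-lettering only triggers on Nack(false, false); the current handler nacks with requeue=true, so you would also switch to requeue=false once a message has failed a bounded number of times.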
Wiring it together
main.go for the notif-service is straightforward — no HTTP server, no router, just config, RabbitMQ, and the message loop:
func main() {
conf, err := config.New()
handleError(err, "unable to load .env configs")
slog.Info("configs loaded", "app", conf.App.Name)
mq, err := rabbitmq.New(conf.RabbitMQ)
handleError(err, "rabbitmq connection failed")
defer mq.Close()
slog.Info("rabbitmq connected")
notifRepo, err := repository.NewNotifRepository(mq)
handleError(err, "failed to init notif repository")
notifSvc := service.NewNotifService(notifRepo, conf.SMTP)
notifHandler := handler.NewNotifHandler(notifSvc)
ctx := context.Background()
notifHandler.ReceiveNotif(ctx)
}
ReceiveNotif is called on the main goroutine and blocks — this is intentional. The process has nothing else to do, so there is no need to spin up a separate goroutine and block on a signal channel. One caveat: ctx here is context.Background(), which is never cancelled, so the handler's ctx.Done() branch only fires if you derive the context from OS signals (for example with signal.NotifyContext). The deferred mq.Close() runs whenever ReceiveNotif returns — say, because the delivery channel closed — shutting down the channel and connection cleanly.
End-to-end flow
With both services running, the full registration and confirmation flow looks like this:
Client user-service RabbitMQ notif-service
| | | |
|-- POST /register ------->| | |
| |-- hash password | |
| |-- generate token | |
| |-- INSERT user | |
| |-- publish message ---->| |
|<-- 201 Created ----------| |-- deliver message -->|
| | | |-- deserialize
| | | |-- render template
| | | |-- send email
| | |<-- Ack --------------|
| | | |
| (user clicks link) | | |
|-- GET /confirm-email?token=... -->| | |
| |-- lookup by token | |
| |-- IsVerified = true | |
| |-- clear token | |
|<-- 200 OK ---------------| | |
The user-service returns 201 Created as soon as the message is published — it does not wait for the email to be sent. The notif-service processes the message independently, and if it fails, RabbitMQ holds the message until the next attempt. The two services are fully decoupled at runtime.
Wrapping Up
We’ve built a fully decoupled email notification system across two independent Go microservices. The user-service handles the full HTTP lifecycle — registration, JWT authentication, and email confirmation — while the notif-service runs as a dedicated AMQP consumer with a single responsibility: receive a message, send an email.
The core idea is that the user-service never waits for an email to be sent. It publishes an event, returns a response, and moves on. The notif-service picks up that event on its own schedule, retries on failure, and acknowledges only after a successful send. RabbitMQ acts as the durable buffer between the two — if either service restarts, no messages are lost.
Along the way we covered a number of non-obvious correctness issues that are easy to get wrong:
- Durable queues + persistent messages — you need both for messages to survive a broker restart. One without the other is not enough.
- Manual acknowledgement — auto-ack: true silently drops messages on consumer crashes. Always ack after processing, nack with requeue on failure.
- Retry on startup — Go starts in milliseconds, RabbitMQ takes seconds. A single amqp.Dial with no retry will fail on every cold start in Docker Compose.
- Algorithm validation in JWT middleware — without checking token.Method, an attacker can bypass signature verification entirely using alg: none.
- Error interface in gin.H — passing errors.New("...") to a JSON response serializes to {}. Always use a plain string or .Error().
- Scan vs Find in GORM — Find is for model types. Use Scan when projecting onto plain DTO structs.
This architecture scales naturally. If email volume grows, you can run multiple instances of the notif-service consuming from the same queue — RabbitMQ distributes messages across consumers automatically. If you need to add other notification types (SMS, push notifications, webhooks), you add new consumers and new queues without touching the user-service at all. The producer doesn’t need to know who is listening.
If you found this article useful, the full source code is available on GitHub — feel free to clone it, open issues, or submit PRs:
🔗 github.com/yehezkiel1086/go-rabbitmq-email-notification