Building Production-Ready Microservices in Go: gRPC + RabbitMQ
A complete walkthrough of an inventory management system using hexagonal architecture, gRPC for synchronous communication, and RabbitMQ for async event-driven messaging.
Microservices are easy to talk about and hard to build correctly. Most tutorials stop at “here’s how to run two services that call each other.” They don’t show you how to structure each service cleanly, how to pick the right communication pattern for each interaction, or how to wire everything together in a way that doesn’t fall apart the moment you add a third service.
This post walks through a real project — a distributed inventory management system — that demonstrates all of these things together. By the end, you’ll understand how to build multiple Go services that communicate via gRPC and RabbitMQ, how to apply hexagonal (clean) architecture to each service, and how all the pieces connect.
The full source code is available at: github.com/yehezkiel1086/go-grpc-inventory-microservices
The System We’re Building
The system manages inventory for an e-commerce platform. It has five services:
- User Service — authentication, JWT issuance, user management
- Product Service — product catalog, exposes both HTTP and gRPC
- Inventory Service — stock tracking, gRPC only
- Order Service — order creation and management, orchestrates the others
- Notification Service — event-driven notifications via RabbitMQ
Here’s the high-level architecture diagram:

The order flow is what makes this interesting:
- A user creates an order via the Order Service HTTP API
- Order Service calls Product Service (gRPC) to get the product’s current price
- Order Service calls Inventory Service (gRPC) to verify stock is available
- Order Service calls Inventory Service (gRPC) again to deduct the stock
- Order Service publishes an event to RabbitMQ
- Notification Service (a RabbitMQ consumer) receives the event and stores a notification
- The user can later fetch their notifications via the Notification Service HTTP API
Two communication patterns, used deliberately. Let’s talk about why.
Why Two Communication Patterns?
Before writing any code, you need to decide how each pair of services will talk to each other. This is not a style choice — it’s an architectural decision with real consequences.
gRPC for synchronous calls — when Order Service creates an order, it needs the product price and inventory count before it can proceed. There is no order without these values. The call is blocking by design. gRPC gives you strong typing via Protocol Buffers, efficient binary serialization, and automatic code generation for both client and server.
RabbitMQ for async events — when an order is placed, the user should eventually be notified. Whether that notification arrives 50ms or 500ms later makes no difference to the user experience. More importantly, if the Notification Service is temporarily down, the order should still succeed. The event sits in the queue and gets processed when the service recovers. This is exactly what a message broker is for.
The rule of thumb: if the calling service needs the result to continue, use gRPC. If it’s publishing a fact that others can react to in their own time, use a message broker.
Tech Stack
- Language: Go 1.25
- HTTP Framework: Gin
- RPC Framework: gRPC + Protocol Buffers
- Message Broker: RabbitMQ
- DB: PostgreSQL
- Auth: JWT
Each service gets its own PostgreSQL schema. There is no shared database. This is non-negotiable in a real microservices system — shared databases create invisible coupling between services that defeats the purpose of the architecture.
Project Structure
The repo is a Go monorepo — one go.mod at the root, all services under services/. Shared generated protobuf code lives under services/common/.
```
.
├── go.mod
├── protobuf/
│   ├── product.proto
│   └── inventory.proto
└── services/
    ├── common/
    │   └── protobuf/        # generated Go code from .proto files
    │       ├── product/
    │       └── inventory/
    ├── user-service/
    ├── product-service/
    ├── inventory-service/
    ├── order-service/
    └── notif-service/
```
Each individual service follows the same internal structure:
```
<service>/
├── cmd/
│   └── http/main.go         # or rpc/main.go for gRPC-only services
└── internal/
    ├── adapter/
    │   ├── config/          # Env-based config loading
    │   ├── handler/         # HTTP handlers, gRPC handlers, middleware
    │   └── storage/
    │       ├── postgres/    # DB connection + repository implementations
    │       └── rabbitmq/    # MQ connection (where applicable)
    └── core/
        ├── domain/          # Entities, DTOs, errors
        ├── port/            # Interfaces (the contracts)
        ├── service/         # Business logic
        └── util/            # JWT, password hashing
```
This is hexagonal architecture (also called ports and adapters). The core/ directory has no knowledge of Gin, GORM, gRPC, or RabbitMQ. It only knows about interfaces. The adapter/ directory implements those interfaces. This makes every piece of infrastructure swappable and every piece of business logic independently testable.
Part 1: Defining the Contracts with Protocol Buffers
Before writing any Go code for the gRPC services, you define your service contracts in .proto files. These are the source of truth — both the server and client generate their code from the same file, guaranteeing type safety across service boundaries.
protobuf/product.proto:
```proto
syntax = "proto3";

option go_package = "github.com/yehezkiel1086/go-grpc-inventory-microservices/services/common/protobuf/product";

message Product {
  uint64 id = 1;
  string name = 2;
  double price = 3;
  string description = 4;
  int32 quantity = 5;
}

service ProductService {
  rpc GetProduct (GetProductRequest) returns (GetProductResponse);
  rpc CreateProduct (CreateProductRequest) returns (CreateProductResponse);
  rpc UpdateProduct (UpdateProductRequest) returns (UpdateProductResponse);
  rpc DeleteProduct (DeleteProductRequest) returns (DeleteProductResponse);
}

message GetProductRequest { uint64 id = 1; }
message GetProductResponse { Product product = 1; }

// ...other request/response messages
```
protobuf/inventory.proto:
```proto
syntax = "proto3";

service InventoryService {
  rpc GetInventoryByProductID (GetInventoryByProductIDRequest) returns (GetInventoryByProductIDResponse);
  rpc UpdateInventory (UpdateInventoryRequest) returns (UpdateInventoryResponse);
  // ...other RPCs
}

message GetInventoryByProductIDRequest { uint64 product_id = 1; }
message GetInventoryByProductIDResponse { Inventory inventory = 1; }

message UpdateInventoryRequest {
  uint64 id = 1;
  int32 quantity = 2;
}

// ...Inventory message and other request/response messages
```
Generate the Go code with:
```sh
protoc --go_out=. --go-grpc_out=. protobuf/product.proto
protoc --go_out=. --go-grpc_out=. protobuf/inventory.proto
```
The generated code goes into services/common/protobuf/ and is imported by any service that needs to call or serve that RPC.
Part 2: The User Service (Auth Foundation)
Every other service validates JWTs, so the User Service establishes the shared auth contract. The JWT claims structure is particularly important — every service needs to read UserID and Role from the token.
util/jwt.go:
```go
type JWTClaims struct {
	jwt.RegisteredClaims
	UserID uint        `json:"user_id"`
	Email  string      `json:"email"`
	Name   string      `json:"name"`
	Role   domain.Role `json:"role"`
}

func CreateJWTToken(conf *config.JWT, user *domain.User) (string, error) {
	duration := time.Duration(conf.DurationMinutes) * time.Minute

	claims := &JWTClaims{
		UserID: user.ID,
		Email:  user.Email,
		Role:   user.Role,
		RegisteredClaims: jwt.RegisteredClaims{
			ExpiresAt: jwt.NewNumericDate(time.Now().Add(duration)),
		},
	}

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	return token.SignedString([]byte(conf.Secret))
}
```
Domain errors — each service defines its own typed errors rather than propagating raw DB or gRPC errors:
```go
var (
	ErrNotFound           = errors.New("user not found")
	ErrEmailAlreadyExists = errors.New("email already exists")
	ErrInvalidCreds       = errors.New("invalid credentials")
	ErrUnauthorized       = errors.New("unauthorized")
	ErrForbidden          = errors.New("forbidden")
	ErrInternalServer     = errors.New("internal server error")
)
```
Role-based access uses typed constants so invalid roles can’t be accidentally created:
```go
type Role uint32

const (
	RoleAdmin Role = 5150
	RoleUser  Role = 2001
)
```
The auth service is careful to return a generic error for both “email not found” and “wrong password” — returning different errors for each case would allow user enumeration attacks:
```go
func (s *AuthService) Login(ctx context.Context, email, password string) (string, error) {
	user, err := s.userRepo.GetUserByEmail(ctx, email)
	if err != nil {
		if err == domain.ErrNotFound {
			return "", domain.ErrInvalidCreds
		}
		return "", domain.ErrInternalServer
	}

	if err := util.ComparePassword([]byte(user.Password), []byte(password)); err != nil {
		return "", domain.ErrInvalidCreds
	}

	return util.CreateJWTToken(s.conf, user)
}
```
The middleware extracts the token from either the Authorization: Bearer header or the jwt_token cookie, so both browser and API clients work:
```go
func Authenticate(conf *config.JWT) gin.HandlerFunc {
	return func(c *gin.Context) {
		cookie, _ := c.Cookie("jwt_token")
		raw := util.ExtractToken(c.GetHeader("Authorization"), cookie)
		if raw == "" {
			c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "missing token"})
			return
		}

		claims, err := util.ParseJWTToken(conf, raw)
		if err != nil {
			c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "invalid or expired token"})
			return
		}

		c.Set("claims", claims)
		c.Next()
	}
}
```
RBAC is handled by two composable middleware functions:
```go
func RequireRole(roles ...domain.Role) gin.HandlerFunc { ... }

func RequireSelfOrAdmin(c *gin.Context) {
	claims := mustGetClaims(c)
	if claims.Role == domain.RoleAdmin {
		c.Next()
		return
	}

	paramID, err := parseIDParam(c)
	if err != nil {
		return
	}

	if claims.UserID != paramID {
		c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": "forbidden"})
		return
	}

	c.Next()
}
```
Part 3: The Product Service (Dual Transport — HTTP + gRPC)
The Product Service is the most architecturally interesting because it runs two servers simultaneously: an HTTP server for external clients (via nginx) and a gRPC server for internal service-to-service calls.
The key insight is that the same service layer powers both transports. The ProductService business logic is written once. Both the HTTP handler and the gRPC handler call into it.
cmd/http/main.go — runs both servers:
```go
func main() {
	// ...setup

	productSvc := service.NewProductService(repo)

	// gRPC server runs in a goroutine
	grpcServer := handler.NewGRPCServer(conf.RPC)
	handler.NewProductGRPCHandler(grpcServer.GetServer(), productSvc)
	go func() {
		slog.Info("gRPC server starting", "port", conf.RPC.Port)
		if err := grpcServer.Run(conf.RPC); err != nil {
			slog.Error("gRPC server failed", "error", err)
			os.Exit(1)
		}
	}()

	// HTTP server runs in the foreground
	router, _ := handler.NewRouter(conf.JWT, productSvc)
	router.Run(conf.HTTP)
}
```
The gRPC handler simply translates between protobuf types and domain types — it contains no business logic:
```go
type ProductGRPCHandler struct {
	svc port.ProductService
	product.UnimplementedProductServiceServer
}

func (h *ProductGRPCHandler) GetProduct(ctx context.Context, req *product.GetProductRequest) (*product.GetProductResponse, error) {
	productRes, err := h.svc.GetProductByID(ctx, uint(req.Id))
	if err != nil {
		return nil, err
	}

	return &product.GetProductResponse{
		Product: &product.Product{
			Id:    uint64(productRes.ID),
			Name:  productRes.Name,
			Price: productRes.Price,
		},
	}, nil
}
```
The UnimplementedProductServiceServer embed is required — it provides default implementations of all RPC methods, so if you add a new RPC to the proto but haven't implemented it yet, you get a proper "unimplemented" error instead of a panic.
Part 4: The Order Service (Orchestrator)
The Order Service is the orchestrator of the system. When a user places an order, this service coordinates three gRPC calls and one RabbitMQ publish — all in a single business transaction.
core/service/order.go — the full CreateOrder flow:
```go
func (s *OrderService) CreateOrder(ctx context.Context, userID uint, req *domain.CreateOrderReq) (*domain.CreateOrderRes, error) {
	// 1. Get product price from Product Service via gRPC
	productResp, err := s.productClient.GetProduct(ctx, &product.GetProductRequest{
		Id: uint64(req.ProductID),
	})
	if err != nil {
		return nil, err
	}

	// 2. Check stock via Inventory Service gRPC
	inventoryResp, err := s.inventoryClient.GetInventoryByProductID(ctx, &inventory.GetInventoryByProductIDRequest{
		ProductId: uint64(req.ProductID),
	})
	if err != nil {
		return nil, err
	}
	if inventoryResp.Inventory.Quantity < int32(req.Quantity) {
		return nil, errors.New("insufficient stock")
	}

	// 3. Deduct stock via Inventory Service gRPC
	newQuantity := inventoryResp.Inventory.Quantity - int32(req.Quantity)
	_, err = s.inventoryClient.UpdateInventory(ctx, &inventory.UpdateInventoryRequest{
		Id:       inventoryResp.Inventory.Id,
		Quantity: newQuantity,
	})
	if err != nil {
		return nil, err
	}

	// 4. Persist the order
	totalPrice := productResp.Product.Price * float64(req.Quantity)
	order, err := s.repo.CreateOrder(ctx, &domain.Order{
		UserID:     userID,
		ProductID:  req.ProductID,
		Quantity:   req.Quantity,
		TotalPrice: totalPrice,
		Status:     domain.OrderStatusPending,
	})
	if err != nil {
		return nil, err
	}

	// 5. Publish notification event to RabbitMQ (async, fire-and-forget)
	msg := fmt.Sprintf("%d %s orders have been placed", order.Quantity, productResp.Product.Name)
	if err := s.mq.Publish(ctx, []byte(msg)); err != nil {
		return nil, err
	}

	return &domain.CreateOrderRes{ /* ... */ }, nil
}
```
Notice that the userID comes from JWT claims injected by the handler — never from the request body. A user can't spoof another user's ID.
main.go — wiring the gRPC clients:
```go
// connect to Product Service gRPC
grpcProductClient, _ := handler.NewGRPCClient(conf.ProductRPC.Host + ":" + conf.ProductRPC.Port)
defer grpcProductClient.Close()

// connect to Inventory Service gRPC
grpcInventoryClient, _ := handler.NewGRPCClient(conf.InventoryRPC.Host + ":" + conf.InventoryRPC.Port)
defer grpcInventoryClient.Close()

// wrap raw gRPC connections into typed clients generated from the proto
inventoryClient := go_grpc_inventory_microservices.NewInventoryServiceClient(grpcInventoryClient.GetConn())
productClient := product.NewProductServiceClient(grpcProductClient.GetConn())

// inject everything
orderSvc := service.NewOrderService(orderRepo, productClient, inventoryClient, rabbitmq)
```
Part 5: RabbitMQ — Publisher and Consumer
RabbitMQ connects the Order Service (publisher) to the Notification Service (consumer). Both sides declare the same queue with identical parameters — this is how AMQP works: the queue is created idempotently by whichever side connects first.
Publisher (Order Service) — **adapter/storage/rabbitmq/rabbitmq.go**:
```go
type RabbitMQ struct {
	conn *amqp.Connection
	ch   *amqp.Channel
	q    amqp.Queue
}

func New(conf *config.Rabbitmq) (*RabbitMQ, error) {
	conn, _ := amqp.Dial(fmt.Sprintf("amqp://%v:%v@%v:%v/", conf.User, conf.Password, conf.Host, conf.Port))
	ch, _ := conn.Channel()

	q, _ := ch.QueueDeclare(
		"notification", // name
		true,           // durable
		false,          // auto-delete when unused
		false,          // exclusive
		false,          // no-wait
		amqp.Table{amqp.QueueTypeArg: amqp.QueueTypeQuorum}, // quorum queue for reliability
	)

	return &RabbitMQ{conn, ch, q}, nil
}

func (r *RabbitMQ) Publish(ctx context.Context, message []byte) error {
	return r.ch.PublishWithContext(ctx, "", r.q.Name, false, false, amqp.Publishing{
		ContentType: "text/plain",
		Body:        message,
	})
}
```
Consumer (Notification Service) — **cmd/rabbitmq/main.go**:
```go
func main() {
	// ...setup db, repo

	mq, _ := rabbitmq.New(conf.Rabbitmq)
	msgs, _ := mq.Consume()

	var forever chan struct{}

	go func() {
		for d := range msgs {
			log.Printf("Received: %s", d.Body)
			notifRepo.CreateNotification(ctx, &domain.Notification{
				UserID:  1, // TODO: parse from structured message payload
				Message: string(d.Body),
				Type:    "order_notification",
			})
		}
	}()

	log.Printf("[*] Waiting for messages. To exit press CTRL+C")
	<-forever // block main goroutine indefinitely
}
```
The `<-forever` pattern is idiomatic Go for a long-running process that does its work in goroutines: `forever` is a nil channel, and receiving from a nil channel blocks forever, so main never exits. The `msgs` channel receives deliveries from RabbitMQ, and the goroutine handles each one. If the connection drops, `msgs` closes and the goroutine exits — in production you'd add reconnection logic here.
One thing to note: the current implementation passes a plain-text string as the message payload. In a production system you’d publish structured JSON that includes the UserID, OrderID, and event type, so the consumer can store proper per-user notifications without hardcoding UserID: 1.
Part 6: The Hexagonal Architecture in Practice
The power of this architecture is visible in the port definitions. Here’s the full Order Service port:
```go
// port/order.go
type OrderRepository interface {
	CreateOrder(ctx context.Context, order *domain.Order) (*domain.Order, error)
	GetOrderByID(ctx context.Context, id uint) (*domain.Order, error)
	GetOrdersByUserID(ctx context.Context, userID uint) ([]domain.Order, error)
	UpdateOrder(ctx context.Context, order *domain.Order) (*domain.Order, error)
	DeleteOrder(ctx context.Context, id uint) error
}

type OrderService interface {
	CreateOrder(ctx context.Context, userID uint, req *domain.CreateOrderReq) (*domain.CreateOrderRes, error)
	GetOrderByID(ctx context.Context, id uint) (*domain.GetOrderRes, error)
	GetOrdersByUserID(ctx context.Context, userID uint) ([]domain.GetOrderRes, error)
	UpdateOrder(ctx context.Context, id uint, req *domain.UpdateOrderReq) (*domain.GetOrderRes, error)
	DeleteOrder(ctx context.Context, id uint) error
}
```
The service layer only ever talks to these interfaces — never to GORM, never to *sql.DB, never to *amqp.Channel. This means:
- You can test the service with a mock repository (no database needed)
- You can swap PostgreSQL for MySQL or MongoDB by writing a new adapter
- You can swap RabbitMQ for Kafka by writing a new adapter, with zero changes to the service
Infrastructure: Docker Compose
The docker-compose.yml runs only the shared infrastructure — PostgreSQL and RabbitMQ. The services themselves run as Go processes (locally with task dev:*, in production as containers or binaries).
```yaml
services:
  db:
    image: postgres:17-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_DB: ${DB_NAME}
    ports:
      - "5432:5432"

  rabbitmq:
    image: rabbitmq:3.13-management-alpine
    ports:
      - "5672:5672"   # AMQP
      - "15672:15672" # Management UI
    environment:
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASSWORD}
```
The RabbitMQ management UI at http://localhost:15672 is invaluable during development — you can see queues, message rates, and manually publish test messages.
Running the System
```sh
# 1. Copy and fill in environment variables
cp .env.example .env

# 2. Start infrastructure
task compose:up

# 3. Start services (each in a separate terminal)
task dev:user:http       # :8081
task dev:product:http    # :8082 (HTTP)
task dev:product:rpc     # :50022 (gRPC)
task dev:inventory:rpc   # :50021 (gRPC)
task dev:order:http      # :8083
task dev:notif:rabbitmq  # RabbitMQ consumer
```
The order in which you start services matters: Inventory and Product gRPC servers must be up before Order Service tries to connect.
Key Lessons
1. Pick your communication pattern before you write code. gRPC for calls where the result is required, message brokers for events that others react to asynchronously. Mixing these up leads to brittle systems or unnecessary coupling.
2. One database per service, always. It’s tempting to share a database early on. Don’t. Once you share a database, you’ve created a deployment dependency between services and made schema changes a cross-team coordination problem.
3. Hexagonal architecture pays off immediately. Even without writing a single test, having clean port/adapter separation means you can reason about each layer independently. The service layer has no imports from gin, gorm, or amqp — it's just Go interfaces and structs.
4. Proto files are your API contract. Treat them with the same care you’d treat a public REST API. Version them carefully. The generated code means any breaking change is caught at compile time, not at runtime.
5. Structured message payloads from day one. The current implementation publishes plain-text messages to RabbitMQ. In production, publish JSON with a defined schema (UserID, OrderID, EventType, Timestamp). Plain text is fine for a proof of concept — it becomes a problem the moment you need to route messages to different consumers or add new fields.
What’s Next
This project is a solid foundation. Natural next steps would be:
- Distributed tracing with OpenTelemetry — trace a single order request across all five services
- Structured RabbitMQ payloads — move from plain-text messages to JSON with a proper event schema
- gRPC interceptors for logging, metrics, and auth at the RPC layer
- Health checks on each service so the load balancer (or Kubernetes) can restart unhealthy instances
- Circuit breakers on the gRPC client calls — if Product Service is down, Order Service should fail fast rather than queuing up connections
The architecture as built handles all of these additions cleanly, because each concern has a clear home.
The complete source code is at github.com/yehezkiel1086/go-grpc-inventory-microservices. The README has full setup instructions and the Taskfile has shortcuts for every development workflow.