
How a Manufacturing Company Automated Delivery Exception Alerts Across 3 Warehouses

When three warehouses feed one Slack channel, every exception looks equally urgent. Here's how we separated the noise from the fires — and cut critical response time from 4 hours to 18 minutes.

Key Takeaways:

  • A single n8n workflow can poll Odoo’s stock.picking records across multiple warehouses and route alerts by severity — critical exceptions reach the ops manager immediately, routine warnings arrive in a daily digest
  • The exception detection logic covers three failure patterns: late shipments, partial deliveries, and carrier mismatches
  • Cross-warehouse aggregation gives the ops team a single view instead of per-warehouse noise
  • Response time to delivery exceptions dropped from 4 hours to under 20 minutes

The operations manager at a mid-sized industrial parts manufacturer was getting 60+ Slack messages a day about deliveries. Most were informational. A handful were fires. She couldn’t tell the difference without clicking through to Odoo.

That’s the real problem this setup solves. Not that exceptions exist — they always will. But that every exception looks the same in the alert stream.

The Before State: Noise That Masked Signal

Before the automation, the company managed three warehouses — two in Vietnam, one in the Philippines — all running Odoo 17. Warehouse staff created transfers in Inventory > Transfers, and a loose internal policy required updating stock.picking states before end of day.

In practice, that didn’t happen consistently. A late shipment might sit in ready state for 36 hours. A partial delivery — where only 7 of 10 ordered units arrived — would be recorded as done with no flag. A carrier mismatch (the booking said DHL, the actual carrier was VNPost) wouldn’t surface at all unless someone caught it on the delivery note.

The result: customer-facing delays that ops didn’t know about until the customer called.

Why n8n and Not a Custom Odoo Module

The team considered two paths: a custom Odoo module with automated server actions, or an n8n workflow. They went with n8n for one practical reason — warehouse coordinators needed to adjust detection thresholds without involving a developer.

What counts as “late”? Which carrier combinations trigger a mismatch? Those rules shift with the business. n8n’s visual workflow editor made them editable by a non-technical admin. An Odoo module would have required a code change and a deployment every time.

The tradeoff is real: n8n adds an external dependency. If the n8n instance goes down, alerting stops. That was acceptable here because this isn’t a production line stop — it’s an ops notification. For genuinely mission-critical logic, keep it inside Odoo.

Three Exception Detectors

The n8n workflow runs every 15 minutes. It makes three separate JSON-RPC calls to Odoo — one per exception type — then aggregates results before routing.

1. Late shipments

Transfers stuck in the Ready state (internal state value assigned) more than 2 hours past their scheduled_date trigger a late-shipment alert. The query targets outgoing pickings only:

// n8n HTTP Request node — Late Shipment Check
{
  "jsonrpc": "2.0",
  "method": "call",
  "params": {
    "service": "object",
    "method": "execute_kw",
    "args": [
      "mydb", uid, password,
      "stock.picking", "search_read",
      [[
        ["state", "=", "assigned"],
        ["scheduled_date", "<", "{{$now.minus({hours: 2}).toISO()}}"],
        ["picking_type_code", "=", "outgoing"]
      ]],
      {
        "fields": ["name", "scheduled_date", "partner_id", "warehouse_id", "carrier_id"],
        "limit": 50
      }
    ]
  }
}
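Inside the HTTP Request node this is just a POST to Odoo's /jsonrpc endpoint. A minimal Python sketch of the same call using only the standard library (the URL, database name, uid, and API key are placeholders for your instance):

```python
import json
import urllib.request

ODOO_URL = "https://odoo.example.com/jsonrpc"  # placeholder instance URL

def execute_kw_payload(db, uid, password, model, method, args, kwargs=None):
    """Build the JSON-RPC envelope Odoo expects for execute_kw calls."""
    return {
        "jsonrpc": "2.0",
        "method": "call",
        "params": {
            "service": "object",
            "method": "execute_kw",
            "args": [db, uid, password, model, method, args, kwargs or {}],
        },
        "id": 1,
    }

def post_jsonrpc(url, payload):
    """POST the payload and return the JSON-RPC result field."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read()).get("result")
```

Calling `post_jsonrpc(ODOO_URL, execute_kw_payload(...))` with the domain and fields from the node config above returns the same list of picking dicts the workflow aggregates.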

The 2-hour buffer was calibrated over two weeks of baseline data. Below 2 hours generated too many false positives from normal processing delays. That number will be different for your operation — more on this below.

2. Partial deliveries

These are stock.picking records in done state where stock.move lines have a quantity_done that doesn’t match product_uom_qty. The check looks back 24 hours to avoid re-alerting on old partials:

# Python equivalent of the n8n detection logic
from datetime import timedelta
from odoo import fields

pickings = env['stock.picking'].search([
    ('state', '=', 'done'),
    ('picking_type_code', '=', 'outgoing'),
    ('date_done', '>=', fields.Datetime.now() - timedelta(hours=24))
])

partials = []
for picking in pickings:
    for move in picking.move_ids_without_package:
        # note: Odoo 17 renamed quantity_done to quantity on stock.move
        if move.quantity_done < move.product_uom_qty:
            partials.append({
                'picking': picking.name,
                'product': move.product_id.name,
                'ordered': move.product_uom_qty,
                'delivered': move.quantity_done,
                'partner': picking.partner_id.name,
            })

Partial deliveries always route as critical. A shortfall the customer didn’t pre-approve is always a problem, regardless of the quantity gap.

3. Carrier mismatches

The company maintains a mapping in a Google Sheet — a 12-row table of expected carrier per delivery zone. Not worth an Odoo model for 12 rows. The n8n workflow reads that table once at startup and caches it, then compares the carrier_id (a delivery.carrier record) on each outgoing stock.picking against the expected carrier for that res.partner’s city.

Zone-level carrier mismatches are warning-level. If the destination country differs from what was booked, they escalate to critical.
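The comparison itself is small. A hypothetical sketch of the classifier, assuming the sheet has already been loaded into a city-to-carrier dict and the booked and actual countries come from the order and the dispatch record:

```python
def classify_carrier_mismatch(expected_by_city, city, actual_carrier,
                              booked_country, actual_country):
    """Return None, 'warning', or 'critical' for one outgoing picking.

    expected_by_city: {city: carrier} cached from the Google Sheet.
    All argument names are illustrative, not Odoo field names.
    """
    if booked_country != actual_country:
        return "critical"          # wrong destination country: escalate
    expected = expected_by_city.get(city)
    if expected is not None and expected != actual_carrier:
        return "warning"           # zone-level mismatch: daily digest
    return None                    # carrier matches, nothing to flag
```

Unknown cities fall through to None here; the real workflow would decide whether an unmapped zone is itself worth a warning.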

Severity Routing: Two Channels, One Rule

The aggregated exceptions feed into a branching node that splits on severity.

Critical exceptions — partial deliveries, country-level carrier mismatches, shipments late by more than 6 hours:

  • Immediate Slack message to #ops-critical with the picking name, partner, warehouse, and a direct link to the Odoo record
  • Also creates an Odoo activity on the stock.picking record so it appears in the responsible user’s inbox

Warning exceptions — routine late shipments under 6 hours, zone-level carrier mismatches:

  • Held in a buffer variable throughout the day
  • Sent as a single digest to #ops-daily at 17:30, grouped by warehouse

The digest format matters more than it sounds. Before this change, warning-level alerts arrived as individual Slack messages throughout the day. Ops staff either ignored them (alert fatigue) or interrupted their work to check each one. The daily digest converted warnings from a stream of interruptions into a scheduled review task.
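Put together, the routing rules above reduce to a small branch function. A sketch (the exception dicts and field names like hours_late and scope are illustrative, not Odoo fields):

```python
CRITICAL_LATE_HOURS = 6  # shipments later than this escalate to critical

def route_exceptions(exceptions):
    """Split aggregated exceptions into (critical, warning_buffer).

    Each exception is a dict like:
      {"type": "late" | "partial" | "carrier",
       "hours_late": float,        # present for "late"
       "scope": "zone" | "country"}  # present for "carrier"
    """
    critical, warnings = [], []
    for exc in exceptions:
        if exc["type"] == "partial":
            critical.append(exc)                       # always critical
        elif exc["type"] == "late":
            (critical if exc["hours_late"] > CRITICAL_LATE_HOURS
             else warnings).append(exc)
        elif exc["type"] == "carrier":
            (critical if exc["scope"] == "country"
             else warnings).append(exc)
    return critical, warnings
```

The critical list fires Slack messages immediately; the warning list is what accumulates in the buffer until 17:30.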

Cross-Warehouse Aggregation

All three warehouses feed the same workflow. Each query filters by warehouse_id using the Odoo stock.warehouse IDs stored in the n8n credential config. The aggregation step groups exceptions by warehouse before routing.

The first version of the workflow sent separate Slack messages per warehouse per exception type. On a busy day, that was 40 messages. The ops manager turned off channel notifications within a week.

Aggregation isn’t optional. If you’re polling multiple data sources and posting results separately, you’ve rebuilt the noise problem you were trying to solve.

The final digest message structure looks like this:

📦 Ops Daily Digest — 27 Apr 2026

*Hanoi Warehouse*
• SO/00412 — DHL booked, VNPost dispatched (zone mismatch)
• SO/00398 — Scheduled 10:00, still in Ready state

*Ho Chi Minh Warehouse*
• (no warnings)

*Manila Warehouse*
• SO/00441 — Scheduled 11:30, still in Ready state
• SO/00445 — Scheduled 13:00, still in Ready state

One message. All three warehouses. Ops reviews it at end of day and decides what needs follow-up.
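The grouping behind that message is straightforward. A Python sketch of the digest builder (the real workflow does this inside n8n, so names here are illustrative only):

```python
def build_digest(warnings, warehouses, date_label):
    """Render the end-of-day digest: one message, grouped by warehouse.

    warnings: list of {"warehouse": str, "line": str}.
    warehouses: display order; sites with nothing to report still
    appear, with an explicit "(no warnings)" line.
    """
    by_wh = {}
    for w in warnings:
        by_wh.setdefault(w["warehouse"], []).append(w["line"])
    parts = [f"📦 Ops Daily Digest — {date_label}"]
    for wh in warehouses:
        parts.append(f"\n*{wh}*")
        lines = by_wh.get(wh)
        if lines:
            parts.extend(f"• {line}" for line in lines)
        else:
            parts.append("• (no warnings)")
    return "\n".join(parts)
```

Listing every warehouse, including empty ones, is deliberate: "(no warnings)" is information, while a silently missing section is ambiguous.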

The Outcome

After 6 weeks in production:

  • Response time to critical exceptions: down from 4.2 hours average to 18 minutes (measured by time between scheduled_date breach and first Odoo activity action by warehouse staff)
  • Daily Slack message volume: 35–50 individual messages replaced by 1 grouped digest
  • Partial delivery catch rate: 100% — every shortfall now creates an Odoo activity before the shift ends
  • Customer escalations from unannounced short shipments: down 70% in the first month

The carrier mismatch detection also uncovered a systemic booking error in the Philippines warehouse — a default carrier had been misconfigured for 3 months and no one had noticed because deliveries were eventually arriving. Finding it took 2 days of n8n alert history and one conversation with the warehouse coordinator. Manual review of the same data would have required pulling 90 days of pickings.

What Transfers to Your Setup

You don’t need three warehouses to use this pattern. The exception detection logic works for a single warehouse, and the same JSON-RPC queries apply to any Odoo 16 or 17 instance, allowing for minor field renames between versions (quantity_done on stock.move became quantity in 17, for example).

A few things to get right before you build:

Calibrate your late threshold before going live. Two hours worked here. For same-day fulfillment, 30 minutes might be the threshold. For weekly B2B shipments, 24 hours. Run a week of silent detection — log exceptions without sending alerts — to find your natural baseline before you start paging people.
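Silent detection is easy to sketch: log how far past scheduled_date each picking actually moved during the observation week, then read a threshold off the distribution. A hypothetical helper:

```python
def suggest_threshold(delays_hours, percentile=0.95):
    """Suggest a late-alert threshold from silently logged baseline data.

    delays_hours: observed delays (hours past scheduled_date) collected
    during a no-alert week. Returns the delay at the given percentile;
    alerting below it would mostly flag normal processing lag.
    """
    if not delays_hours:
        raise ValueError("need at least one observation")
    ordered = sorted(delays_hours)
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[idx]
```

The 95th percentile is only a starting point; the point is to pick the number from your own data rather than copy the 2 hours that happened to fit this operation.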

Don’t mix warning and critical in the same channel. This is the fastest path to alert fatigue. Two channels, or a digest model — pick one before you configure the routing.

The Odoo activity creation is load-bearing. Slack messages can be missed. An activity on the stock.picking record appears in the responsible user’s Odoo inbox and stays there until marked done. The combination of Slack (fast, ambient) and Odoo activity (persistent, accountable) is more reliable than either alone.
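One way to create that activity from n8n is a JSON-RPC create on mail.activity. A sketch of the argument builder (all IDs are instance-specific placeholders that the workflow would look up once and cache):

```python
def activity_create_args(res_model_id, picking_id, activity_type_id,
                         user_id, summary, date_deadline):
    """Build execute_kw args for a mail.activity pinned to a picking.

    res_model_id: ir.model id of stock.picking (instance-specific).
    activity_type_id: e.g. the To-Do activity type.
    date_deadline: "YYYY-MM-DD" string.
    """
    return [
        "mail.activity", "create",
        [{
            "res_model_id": res_model_id,
            "res_id": picking_id,          # the flagged stock.picking
            "activity_type_id": activity_type_id,
            "user_id": user_id,            # responsible warehouse user
            "summary": summary,
            "date_deadline": date_deadline,
        }],
    ]
```

The resulting args slot into the same JSON-RPC envelope as the search_read queries; the activity then sits in the assignee's Odoo inbox until marked done.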

This is the same pattern we used in the n8n + Odoo uninvoiced sales order alert setup — a lightweight external workflow that creates persistent Odoo records rather than just posting into the void.


Key Takeaways

  • Three exception types cover most outbound delivery failures: late shipments, partial deliveries, and carrier mismatches
  • Severity-based routing — immediate Slack for critical, daily digest for warnings — is the difference between a useful alert system and an ignored one
  • Cross-warehouse aggregation must be built from the start; per-warehouse per-type messages rebuild the noise problem you were trying to solve
  • Creating Odoo activities on flagged stock.picking records gives alerts persistence and accountability that Slack notifications alone cannot provide
  • Silent detection before going live lets you calibrate thresholds against real data rather than guessing

At Trobz, we build n8n + Odoo workflows for operations teams that are managing exceptions manually. If your warehouse runs on Odoo and you’re still checking transfers by hand, reach out — a first version of this usually runs within a day.

Ready to put AI to work?

Let's explore how Trobz AI can automate your processes, enhance your ERP, and help your team make better decisions — faster.