# Job Auto-Advancement

Job Auto-Advancement is the mechanism by which completing a job at one station automatically moves it to the next station in the Station Process Flow.
## How It Works

When `complete_ops_job()` is called, after recording parts and closing the station history entry, it checks the `ops_station_connections` table:

```sql
SELECT COUNT(*) INTO _next_station_count
FROM ops_station_connections sc
WHERE sc.from_station_id = _current_station_id
  AND sc.connection_type = 'feeds_into';
```

## Decision Logic
```mermaid
flowchart TD
    A[Job completes at Station A] --> B[Count 'feeds_into' connections from Station A]
    B --> C{Count?}
    C -->|= 1| D[Auto-advance to the single next station]
    C -->|= 0| E[End of line: mark job 'completed']
    C -->|>= 2| F[Multiple paths: prompt operator to select]
    D --> G[Reset job to 'pending' at next station]
    F --> K{Operator selects?}
    K -->|Yes| G
    K -->|No| L[Job enters 'queued' state at current station]
    G --> H[Clear started_at/started_by/completed_at/completed_by]
    G --> I[Set posted_at = completion time]
    G --> J[Create new station_history entry]
```
Only `feeds_into` connections trigger auto-advancement; `rework` connections are ignored.
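The decision logic above can be sketched as a small function. This is an illustrative Python model, not the actual PL/pgSQL implementation; `decide_next_step` and its return tuples are hypothetical names.

```python
def decide_next_step(next_stations: list) -> tuple:
    """Map the count of 'feeds_into' connections to the action taken."""
    if len(next_stations) == 1:
        return ("auto_advance", next_stations[0])   # single path: advance automatically
    if len(next_stations) == 0:
        return ("complete", None)                   # end of line: job is done
    return ("prompt_operator", next_stations)       # multiple paths: never guess

# Usage:
decide_next_step(["Station B"])               # → ("auto_advance", "Station B")
decide_next_step([])                          # → ("complete", None)
decide_next_step(["Station B", "Station D"])  # → ("prompt_operator", ["Station B", "Station D"])
```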
## What Happens During Auto-Advance

- Job state resets to `pending` at the new station.
- Lifecycle timestamps are reset for the new station leg:
  - `started_at`, `started_by`, `completed_at`, `completed_by` are cleared
  - `posted_at` is set to the completion time of the previous station
  - `posted_by` is set to the user who completed at the previous station
- Part counts on `ops_jobs` are updated to reflect the current pipeline yield:
  - `good_parts` = the latest station's good output (current yield, not cumulative)
  - `scrap_parts` = sum of scrap across ALL stations (scrap is permanently removed at each stage)
- Per-station snapshot is saved in `ops_job_station_history`:
  - `good_parts_at_station` / `scrap_parts_at_station` — what was recorded at this station
  - `started_at/by`, `completed_at/by`, `posted_at/by` — preserved from the job before reset
- A new station history entry is created at the next station with an incremented `entry_number`.
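The field resets above can be sketched as follows. This is a minimal illustration over a plain dict whose keys mirror the column names from the text; `advance_job` itself is a hypothetical helper, not code from the repository.

```python
def advance_job(job: dict, next_station_id: int) -> dict:
    """Reset lifecycle fields for the new station leg (illustrative only)."""
    prev_completed_at = job["completed_at"]
    prev_completed_by = job["completed_by"]
    # Clear the per-leg lifecycle timestamps.
    for field in ("started_at", "started_by", "completed_at", "completed_by"):
        job[field] = None
    # The new leg is "posted" at the previous leg's completion time, by its completer.
    job["posted_at"] = prev_completed_at
    job["posted_by"] = prev_completed_by
    job["status"] = "pending"
    job["station_id"] = next_station_id
    return job
```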
## Parts Flow as a Pipeline

Parts flow through stations like a pipeline. Each station receives the previous station's good output as its input. Parts can only decrease or stay the same — they never increase.

- Station input = previous station's good output (or the initial batch size for the first station)
- Station output: good + scrap must equal input (any gap is unaccounted loss)
- Job `good_parts` = the latest station's good output (the current yield)
- Job `scrap_parts` = the sum of all scrap across every station the job has passed through
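These invariants can be expressed directly. A minimal sketch, with hypothetical helper names — the real system enforces this inside `complete_ops_job()`:

```python
def station_unaccounted(input_parts: int, good: int, scrap: int) -> int:
    """Any gap between input and good + scrap is unaccounted loss."""
    assert good + scrap <= input_parts, "parts can never increase"
    return input_parts - good - scrap

def job_totals(passes: list) -> tuple:
    """passes = [(good, scrap), ...] per station, in flow order."""
    good_parts = passes[-1][0]               # current yield = latest good output
    scrap_parts = sum(s for _, s in passes)  # scrap accumulates across all stations
    return good_parts, scrap_parts
```

For example, `job_totals([(10, 2), (8, 2), (7, 1)])` yields `(7, 5)`: 7 good parts currently in the pipeline, 5 scrapped in total.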
## Multiple Next Stations (Operator Selection)

When there are 2+ `feeds_into` connections from the current station, the system does not auto-advance. Instead, it prompts the operator to choose which station the job goes to next. The system never guesses which path to take.
Two outcomes:

- **Operator selects a station** — the job advances to that station as `pending`, following the same reset logic as auto-advance (timestamps cleared, new station history entry created).
- **Operator declines to select** — the job enters the `queued` state at the current station. It remains there until an operator selects a destination, at which point the job transitions to `pending` at the chosen station. See Job Lifecycle for the full state machine.
## No Next Station (End of Line)

When there are 0 `feeds_into` connections from the current station, the job is marked as `completed`. This is a terminal state — the job has reached the end of the process flow.
## Example Flow

```
Station A (Assembly) --feeds_into--> Station B (Testing) --feeds_into--> Station C (Packaging)
                                              \--rework--> Station A (Assembly)
```
- Job posted at Station A with a batch of 12 parts: `pending`
- Operator starts job: `in_progress`
- Operator completes with 10 good, 2 scrap:
  - Station A history: `good_parts_at_station=10`, `scrap_parts_at_station=2`
  - Job totals: `good_parts=10`, `scrap_parts=2`
  - 10 parts advance to Station B (Station A's good output)
- 1 `feeds_into` from A → auto-advance to Station B
- Job is now `pending` at Station B with 10 parts as input
- Operator starts and completes with 8 good, 2 scrap:
  - Station B history: `good_parts_at_station=8`, `scrap_parts_at_station=2`
  - Job totals: `good_parts=8`, `scrap_parts=4` (2 from A + 2 from B)
  - 8 parts advance to Station C (Station B's good output)
- 1 `feeds_into` from B → auto-advance to Station C
- Operator starts and completes Station C with 7 good, 1 scrap:
  - Station C history: `good_parts_at_station=7`, `scrap_parts_at_station=1`
  - Job final: `good_parts=7`, `scrap_parts=5` (2 + 2 + 1)
- 0 `feeds_into` from C → job marked `completed`
Summary: 12 parts entered the pipeline. 7 came out good. 5 were scrapped across three stations. Every station’s good + scrap equals its input.
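The walkthrough above can be replayed as a short simulation. The per-station numbers come straight from the example; the loop itself is an illustrative sketch of the pipeline rule, not production code:

```python
batch = 12
passes = [("A", 10, 2), ("B", 8, 2), ("C", 7, 1)]  # (station, good, scrap)

input_parts, total_scrap = batch, 0
for name, good, scrap in passes:
    # Each station's good + scrap must equal its input (no unaccounted loss here).
    assert good + scrap == input_parts, f"unaccounted loss at Station {name}"
    total_scrap += scrap
    input_parts = good  # the next station receives this station's good output

print(input_parts, total_scrap)  # → 7 5 (7 good out of 12 in, 5 scrapped)
```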
## Rework Flow

When Station B has a `rework` connection back to Station A, an operator can send defective parts back for reprocessing instead of scrapping them.
```
Station B (Testing): input=10
  → 6 good (advance to Station C)
  → 2 scrap (permanently removed)
  → 2 rework (sent back to Station A)
```
The 2 reworked parts re-enter Station A as new input. Station A processes them as a separate pass, captured in a new station history entry with an incremented `entry_number`:

```
Station A (Assembly) -- rework pass: input=2, good=2, scrap=0
  → 2 parts advance back to Station B
```
Station B then receives those 2 parts as additional input, again tracked as a new history entry. Each pass through a station is recorded independently — the station history captures the full audit trail of every pass.
`rework` connections do not trigger auto-advancement. The operator explicitly sends parts back via the rework action.
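The per-pass audit trail can be sketched like this, using the rework scenario's numbers. The dict keys mirror the `ops_job_station_history` columns from the text; `record_pass` is a hypothetical helper:

```python
def record_pass(history: list, station: str, good: int, scrap: int) -> None:
    """Append a per-pass snapshot; each pass gets its own entry_number."""
    history.append({
        "entry_number": len(history) + 1,
        "station": station,
        "good_parts_at_station": good,
        "scrap_parts_at_station": scrap,
    })

history = []
record_pass(history, "A", 10, 2)  # first pass through Assembly (input=12)
record_pass(history, "B", 6, 2)   # Testing: 2 of the 10 also sent to rework
record_pass(history, "A", 2, 0)   # rework pass back through Assembly
record_pass(history, "B", 2, 0)   # reworked parts re-tested
```

Each pass through a station gets its own entry, so a job that visits Station A twice leaves two independent Station A records.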
## Concurrency Control

The `complete_ops_job()` function uses `SELECT ... FOR UPDATE` to lock the job row, preventing:
- Two operators completing the same job simultaneously
- Race conditions during the auto-advance check
## Codebase Paths

- Complete function with auto-advance: `database/sql_scripts/functions/ops/ops_job_functions.sql` (`complete_ops_job`)
- Connection table: `database/sql_scripts/tables/ops_station_connections.sql`