Difficulty: Easy
Correct Answer: A batch job (process many records in scheduled runs)
Explanation:
Introduction / Context:
Workloads that touch many records in the same file—like monthly billing, payroll, or mass updates—benefit from throughput-oriented processing. Batch jobs are optimized for sequential, large-volume work with minimal user interaction.
Given Data / Assumptions:
The workload applies the same operation to a large number of records in one file (e.g., monthly billing, payroll, or a mass update); no per-record user interaction is required, results are not needed immediately, and the run can be scheduled during off-peak hours.
Concept / Approach:
Batch processing amortizes setup costs across many records, leverages sequential I/O, and maximizes throughput. Real-time/online jobs suit low-latency, per-transaction needs; they are suboptimal for mass processing due to interaction overhead and locking contention.
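As a rough illustration of how batching amortizes setup cost, the sketch below (Python with the standard-library sqlite3 module; the database file, table, and column names are hypothetical) contrasts per-record commits with a single batched pass over the same rows.

```python
import sqlite3

# Hypothetical schema: an 'accounts' table with (id, balance) rows.
conn = sqlite3.connect("billing.db")

def update_per_record(fee_rows):
    """Online-style: one statement and one commit per record (high overhead)."""
    for account_id, fee in fee_rows:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (fee, account_id))
        conn.commit()          # transaction setup/teardown paid once per row

def update_batch(fee_rows):
    """Batch-style: one prepared statement reused for all rows, one commit."""
    conn.executemany("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     [(fee, account_id) for account_id, fee in fee_rows])
    conn.commit()              # setup cost amortized across the whole batch
```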
Step-by-Step Solution:
1) Classify the workload: many records, uniform operations (e.g., computing statements).
2) Choose a batch job for sequential scans and aggregation efficiency (see the sketch after this list).
3) Schedule the run during off-peak hours to reduce contention.
4) Monitor I/O and memory to ensure sustained throughput.
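A minimal sketch of such a scheduled batch run follows (Python; the file names, chunk size, and compute_statement helper are hypothetical). The job streams input records sequentially, commits in chunks to bound memory use and lock duration, and is meant to be triggered off-peak by a scheduler such as cron.

```python
import csv
import sqlite3

CHUNK_SIZE = 10_000  # hypothetical chunk size; tune against memory and lock time

def compute_statement(row):
    """Hypothetical per-record computation, e.g. a monthly billing amount."""
    return float(row["usage"]) * float(row["rate"])

def run_monthly_billing(input_csv="usage_export.csv", db_path="billing.db"):
    conn = sqlite3.connect(db_path)
    chunk = []
    with open(input_csv, newline="") as f:
        for row in csv.DictReader(f):              # sequential scan of the input
            chunk.append((row["account_id"], compute_statement(row)))
            if len(chunk) >= CHUNK_SIZE:
                conn.executemany(
                    "INSERT INTO statements (account_id, amount) VALUES (?, ?)",
                    chunk)
                conn.commit()                       # bounded work per transaction
                chunk.clear()
    if chunk:                                       # flush the final partial chunk
        conn.executemany(
            "INSERT INTO statements (account_id, amount) VALUES (?, ?)",
            chunk)
        conn.commit()
    conn.close()

if __name__ == "__main__":
    run_monthly_billing()   # e.g. scheduled off-peak via cron rather than run interactively
```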
Verification / Alternative check:
Performance benchmarks commonly show batch outperforming interactive modes for bulk work. ETL pipelines exemplify this pattern.
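One quick way to reproduce this kind of comparison locally is to time per-row commits against a single batched transaction. The sketch below (Python, in-memory SQLite, synthetic data) is a micro-benchmark illustration under those assumptions, not a formal benchmark result.

```python
import sqlite3
import time

rows = [(i, i * 0.05) for i in range(50_000)]    # synthetic (id, fee) records

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f}s")

def per_row():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE charges (id INTEGER, fee REAL)")
    for r in rows:
        conn.execute("INSERT INTO charges VALUES (?, ?)", r)
        conn.commit()                             # one transaction per record

def batched():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE charges (id INTEGER, fee REAL)")
    conn.executemany("INSERT INTO charges VALUES (?, ?)", rows)
    conn.commit()                                 # one transaction for all records

timed("per-row commits", per_row)
timed("single batched commit", batched)
```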
Why Other Options Are Wrong:
Real-time/online jobs prioritize latency and user interaction, not bulk throughput. Declaring all options equally optimal ignores workload characteristics.
Common Pitfalls:
Running massive updates through interactive screens, causing timeouts and user frustration.
Final Answer:
A batch job (process many records in scheduled runs).