Represents the various scheduling strategies for parallel for loops. Detailed explanations of each scheduling strategy are provided alongside each getter. If no schedule is specified, the default is Schedule.Static.
Public Member Functions

abstract void LoopInit (int start, int end, uint num_threads, uint chunk_size)
    Abstract method that built-in schedulers override to implement IScheduler.

abstract void LoopNext (int thread_id, out int start, out int end)
    Abstract method that built-in schedulers override to implement IScheduler.

Properties

static Schedule Static [get]
    The static scheduling strategy. Iterations are divided amongst threads in round-robin fashion. Each thread gets a 'chunk' of iterations, determined by the chunk size. If no chunk size is specified, it's computed as total iterations divided by number of threads.

static Schedule Dynamic [get]
    The dynamic scheduling strategy. Iterations are managed in a central queue. Threads fetch chunks of iterations from this queue when they have no assigned work. If no chunk size is defined, a basic heuristic is used to determine a chunk size.

static Schedule Guided [get]
    The guided scheduling strategy. Similar to dynamic, but the chunk size starts larger and shrinks as iterations are consumed. The shrinking formula is based on the remaining iterations divided by the number of threads. The chunk size parameter sets a minimum chunk size.

static Schedule Runtime [get]
    Runtime-defined scheduling strategy. The schedule is determined by the 'OMP_SCHEDULE' environment variable. Expected format: "schedule[,chunk_size]", e.g., "static,128", "guided", or "dynamic,3".

static Schedule WorkStealing [get]
    The work-stealing scheduling strategy. Each thread gets its own local queue of iterations to execute. If a thread's queue is empty, it randomly selects another thread's queue as its "victim" and steals half of its remaining iterations. The chunk size parameter specifies how many iterations a thread should execute from its local queue at a time.
|
Detailed Description

Represents the various scheduling strategies for parallel for loops. Detailed explanations of each scheduling strategy are provided alongside each getter. If no schedule is specified, the default is Schedule.Static.
◆ LoopInit()

abstract void DotMP.Schedule.LoopInit (int start, int end, uint num_threads, uint chunk_size)    [pure virtual]

Abstract method that built-in schedulers override to implement IScheduler.
◆ LoopNext()

abstract void DotMP.Schedule.LoopNext (int thread_id, out int start, out int end)    [pure virtual]

Abstract method that built-in schedulers override to implement IScheduler.
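Together, LoopInit() and LoopNext() define the contract a scheduler satisfies: LoopInit() receives the loop bounds, thread count, and chunk size once before the loop runs, and LoopNext() is called repeatedly by each thread to claim its next range of iterations. The sketch below is a hypothetical illustration of that contract, not part of DotMP itself; it assumes IScheduler declares exactly these two methods and that a returned range with start >= end signals that no work remains.

    using System.Threading;

    // Hypothetical example: a shared-counter scheduler that hands out
    // fixed-size chunks of iterations in order.
    public class SimpleChunkScheduler : DotMP.IScheduler
    {
        private int loop_end;     // exclusive upper bound of the loop
        private uint chunk_size;  // iterations handed out per LoopNext() call
        private int counter;      // next unassigned iteration, shared by all threads

        public void LoopInit(int start, int end, uint num_threads, uint chunk_size)
        {
            loop_end = end;
            this.chunk_size = chunk_size;
            counter = start;
        }

        public void LoopNext(int thread_id, out int start, out int end)
        {
            // Atomically claim the next chunk; callers are assumed to treat
            // start >= end as "loop finished".
            start = Interlocked.Add(ref counter, (int)chunk_size) - (int)chunk_size;
            end = System.Math.Min(start + (int)chunk_size, loop_end);
        }
    }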
◆ dynamic_scheduler
Internal holder for the DynamicScheduler object.
◆ guided_scheduler
Internal holder for the GuidedScheduler object.
◆ runtime_scheduler
Internal holder for the RuntimeScheduler object.
◆ static_scheduler
Internal holder for the StaticScheduler object.
◆ workstealing_scheduler
Internal holder for the WorkStealingScheduler object.
◆ Dynamic
The dynamic scheduling strategy. Iterations are managed in a central queue. Threads fetch chunks of iterations from this queue when they have no assigned work. If no chunk size is defined, a basic heuristic is used to determine a chunk size.
Pros:
- Better load balancing for loops whose iterations take varying amounts of time.
Cons:
- Higher scheduling overhead than static, since threads contend on the shared queue when fetching chunks.
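As a hypothetical usage sketch (the parameter names schedule, chunk_size, and action are assumed here; check DotMP.Parallel.ParallelFor for the exact signature), the dynamic schedule suits loops whose iterations vary widely in cost:

    double[] results = new double[10_000];

    DotMP.Parallel.ParallelFor(0, results.Length,
        schedule: DotMP.Schedule.Dynamic,
        chunk_size: 16,
        action: i =>
        {
            // Later iterations do far more work than earlier ones, so a
            // fixed up-front partitioning would leave some threads idle.
            for (int j = 0; j < i; j++)
                results[i] += System.Math.Sqrt(j);
        });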
◆ Guided
The guided scheduling strategy. Similar to dynamic, but the chunk size starts larger and shrinks as iterations are consumed. The shrinking formula is based on the remaining iterations divided by the number of threads. The chunk size parameter sets a minimum chunk size.
Pros:
- Balances load like dynamic while reducing scheduling overhead, since fewer, larger chunks are dispatched early in the loop.
Cons:
- Might not handle loops with early heavy load imbalance efficiently.
◆ Runtime
Runtime-defined scheduling strategy. Schedule is determined by the 'OMP_SCHEDULE' environment variable. Expected format: "schedule[,chunk_size]", e.g., "static,128", "guided", or "dynamic,3".
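A minimal sketch of deferring the choice to run time, assuming DotMP reads OMP_SCHEDULE from the process environment when the loop starts and that ParallelFor takes schedule and action parameters as named here (the variable is set programmatically only to keep the example self-contained; normally it would be set in the launching shell):

    using System;

    Environment.SetEnvironmentVariable("OMP_SCHEDULE", "dynamic,64");

    DotMP.Parallel.ParallelFor(0, 1_000_000,
        schedule: DotMP.Schedule.Runtime,
        action: i => { /* loop body */ });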
◆ Static
The static scheduling strategy. Iterations are divided amongst threads in round-robin fashion. Each thread gets a 'chunk' of iterations, determined by the chunk size. If no chunk size is specified, it's computed as total iterations divided by number of threads.
Pros:
- Very low scheduling overhead, since iterations are assigned up front.
Cons:
- Potential for load imbalance.
Note: This is the default strategy if none is chosen.
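A hypothetical sketch requesting the static schedule explicitly with a chunk size of 128, so iterations are dealt to threads in fixed round-robin blocks (the parameter names schedule, chunk_size, and action are assumed; consult DotMP.Parallel.ParallelFor for the exact signature):

    float[] data = new float[1 << 20];

    DotMP.Parallel.ParallelFor(0, data.Length,
        schedule: DotMP.Schedule.Static,
        chunk_size: 128,
        action: i => data[i] = i * 0.5f);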
◆ WorkStealing
The work-stealing scheduling strategy. Each thread gets its own local queue of iterations to execute. If a thread's queue is empty, it randomly selects another thread's queue as its "victim" and steals half of its remaining iterations. The chunk size parameter specifies how many iterations a thread should execute from its local queue at a time.
Pros:
- Good approximation of optimal load balancing.
- No contention over a shared queue.
Cons:
- Stealing can be an expensive operation.