DotMP
DotMP.Parallel Class Reference

The main class of DotMP. Contains all the main methods for parallelism. For users, this is the main class you want to worry about, along with Lock, Shared, Atomic, and GPU. More...

Static Public Member Functions

static void For (int start, int end, Action< int > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a for loop inside a parallel region. A for loop created with For inside of a parallel region is executed in parallel, with iterations being distributed among the threads, and potentially out-of-order. A schedule is provided to inform the runtime how to distribute iterations of the loop to threads. Available schedules are specified by the Schedule enum, and have detailed documentation in the Iter class. Acts as an implicit Barrier(). More...
 
static void ForCollapse ((int, int) firstRange,(int, int) secondRange, Action< int, int > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a collapsed for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops. More...
 
static void ForCollapse ((int, int) firstRange,(int, int) secondRange,(int, int) thirdRange, Action< int, int, int > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a collapsed for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops. More...
 
static void ForCollapse ((int, int) firstRange,(int, int) secondRange,(int, int) thirdRange,(int, int) fourthRange, Action< int, int, int, int > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a collapsed for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops. More...
 
static void ForCollapse ((int, int)[] ranges, Action< int[]> action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a collapsed for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops. More...
 
static void ForReduction< T > (int start, int end, Operations op, ref T reduce_to, ActionRef< T > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a for loop inside a parallel region with a reduction. This is similar to For(), but the reduction allows multiple threads to reduce their work down to a single variable. Each thread gets a thread-local copy of the reduction variable, and the runtime performs a global reduction at the end of the loop. Since the global reduction only involves as many variables as there are threads, ForReduction<T> is much more efficient than a naive approach using the Lock or Atomic classes. Acts as an implicit Barrier(). More...
 
static void ForReductionCollapse< T > ((int, int) firstRange,(int, int) secondRange, Operations op, ref T reduce_to, ActionRef2< T > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a collapsed reduction for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops. More...
 
static void ForReductionCollapse< T > ((int, int) firstRange,(int, int) secondRange,(int, int) thirdRange, Operations op, ref T reduce_to, ActionRef3< T > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a collapsed reduction for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops. More...
 
static void ForReductionCollapse< T > ((int, int) firstRange,(int, int) secondRange,(int, int) thirdRange,(int, int) fourthRange, Operations op, ref T reduce_to, ActionRef4< T > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a collapsed reduction for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops. More...
 
static void ForReductionCollapse< T > ((int, int)[] ranges, Operations op, ref T reduce_to, ActionRefN< T > action, IScheduler schedule=null, uint? chunk_size=null)
 Creates a collapsed reduction for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops. More...
 
static void ParallelRegion (Action action, uint? num_threads=null)
 Creates a parallel region. The body of a parallel region is executed by as many threads as specified by the num_threads parameter. If the num_threads parameter is absent, then the runtime checks if SetNumThreads has been called. If so, it will use that many threads. If not, the runtime will try to use as many threads as there are logical processors. More...
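For illustration, a minimal sketch of a parallel region (assumes the DotMP package is referenced; the thread count and output are illustrative):

```csharp
using System;
using DotMP;

class RegionExample
{
    static void Main()
    {
        // Spawn a team of 4 threads; each thread executes the body once.
        DotMP.Parallel.ParallelRegion(num_threads: 4, action: () =>
        {
            int tid = DotMP.Parallel.GetThreadNum();
            Console.WriteLine($"Hello from thread {tid} of {DotMP.Parallel.GetNumThreads()}");
        });
    }
}
```

If num_threads is omitted, the runtime falls back to SetNumThreads() or, failing that, the number of logical processors, as described above.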
 
static void ParallelFor (int start, int end, Action< int > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel for loop. Contains all of the parameters from ParallelRegion() and For(). This is simply a convenience method for creating a parallel region and a for loop inside of it. More...
 
static void ParallelForReduction< T > (int start, int end, Operations op, ref T reduce_to, ActionRef< T > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel for loop with a reduction. Contains all of the parameters from ParallelRegion() and ForReduction<T>(). This is simply a convenience method for creating a parallel region and a for loop with a reduction inside of it. More...
 
static void ParallelForCollapse ((int, int) firstRange,(int, int) secondRange, Action< int, int > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel collapsed for loop. Contains all of the parameters from ParallelRegion() and ForCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop. More...
 
static void ParallelForCollapse ((int, int) firstRange,(int, int) secondRange,(int, int) thirdRange, Action< int, int, int > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel collapsed for loop. Contains all of the parameters from ParallelRegion() and ForCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop. More...
 
static void ParallelForCollapse ((int, int) firstRange,(int, int) secondRange,(int, int) thirdRange,(int, int) fourthRange, Action< int, int, int, int > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel collapsed for loop. Contains all of the parameters from ParallelRegion() and ForCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop. More...
 
static void ParallelForCollapse ((int, int)[] ranges, Action< int[]> action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel collapsed for loop. Contains all of the parameters from ParallelRegion() and ForCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop. More...
 
static void ParallelForReductionCollapse< T > ((int, int) firstRange,(int, int) secondRange, Operations op, ref T reduce_to, ActionRef2< T > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel collapsed reduction for loop. Contains all of the parameters from ParallelRegion() and ForReductionCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop with a reduction inside of it. More...
 
static void ParallelForReductionCollapse< T > ((int, int) firstRange,(int, int) secondRange,(int, int) thirdRange, Operations op, ref T reduce_to, ActionRef3< T > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel collapsed reduction for loop. Contains all of the parameters from ParallelRegion() and ForReductionCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop with a reduction inside of it. More...
 
static void ParallelForReductionCollapse< T > ((int, int) firstRange,(int, int) secondRange,(int, int) thirdRange,(int, int) fourthRange, Operations op, ref T reduce_to, ActionRef4< T > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel collapsed reduction for loop. Contains all of the parameters from ParallelRegion() and ForReductionCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop with a reduction inside of it. More...
 
static void ParallelForReductionCollapse< T > ((int, int)[] ranges, Operations op, ref T reduce_to, ActionRefN< T > action, IScheduler schedule=null, uint? chunk_size=null, uint? num_threads=null)
 Creates a parallel collapsed reduction for loop. Contains all of the parameters from ParallelRegion() and ForReductionCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop with a reduction inside of it. More...
 
static void Sections (params Action[] actions)
 Creates a sections region. Sections allows the user to submit multiple individual tasks to be distributed among threads. Each active thread dequeues a callback and executes it in parallel. This is useful if you have many individual tasks that need to be executed in parallel, each requiring its own lambda. Acts as an implicit Barrier(). More...
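A minimal sketch of a sections region (assumes the DotMP package is referenced; the section bodies are placeholders):

```csharp
using System;
using DotMP;

class SectionsExample
{
    static void Main()
    {
        DotMP.Parallel.ParallelRegion(() =>
        {
            // Each lambda runs exactly once, on whichever thread dequeues it.
            DotMP.Parallel.Sections(
                () => Console.WriteLine("section A"),
                () => Console.WriteLine("section B"),
                () => Console.WriteLine("section C"));
            // Implicit barrier: all sections finish before any thread continues.
        });
    }
}
```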
 
static TaskUUID Task (Action action, params TaskUUID[] depends)
 Enqueue a task into the task queue. Differing from OpenMP, there is no concept of parent or child tasks as of yet. All tasks submitted are treated equally in a central task queue. More...
 
static void Taskwait (params TaskUUID[] tasks)
 Wait for selected tasks in the queue to complete, or for the full queue to empty if no tasks are specified. Acts as an implicit Barrier() if it is not called from within a task. More...
 
static TaskUUID[] Taskloop (int start, int end, Action< int > action, uint? grainsize=null, uint? num_tasks=null, bool only_if=true, params TaskUUID[] depends)
 Creates a number of tasks to complete a for loop in parallel. If neither grainsize nor num_tasks are specified, a grainsize is calculated on-the-fly. If both grainsize and num_tasks are specified, the num_tasks parameter takes precedence over grainsize. More...
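The tasking methods above compose as in the following sketch, where a taskloop fills an array and a dependent task runs only after every taskloop task completes (assumes the DotMP package is referenced; the array size and work are illustrative):

```csharp
using System;
using DotMP;

class TaskExample
{
    static void Main()
    {
        int[] data = new int[1000];

        DotMP.Parallel.ParallelRegion(() =>
        {
            // Only one thread enqueues; every thread helps execute tasks.
            DotMP.Parallel.Master(() =>
            {
                // Split the iteration space into tasks.
                TaskUUID[] fill = DotMP.Parallel.Taskloop(0, data.Length,
                    i => data[i] = i * i);

                // This task is only eligible once all taskloop tasks finish.
                DotMP.Parallel.Task(() => Console.WriteLine("filled"), fill);
            });

            // Wait for the whole queue to drain; acts as a barrier here.
            DotMP.Parallel.Taskwait();
        });
    }
}
```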
 
static void ParallelMasterTaskloop (int start, int end, Action< int > action, uint? grainsize=null, uint? num_tasks=null, uint? num_threads=null, bool only_if=true)
 Wrapper around Parallel.ParallelRegion(), Parallel.Master(), and Parallel.Taskloop(). More...
 
static void ParallelMaster (Action action, uint? num_threads=null)
 Wrapper around Parallel.ParallelRegion() and Parallel.Master(). More...
 
static void MasterTaskloop (int start, int end, Action< int > action, uint? grainsize=null, uint? num_tasks=null, bool only_if=true)
 Wrapper around Parallel.Master() and Parallel.Taskloop(). More...
 
static void ParallelSections (uint? num_threads=null, params Action[] actions)
 Creates a parallel sections region. Contains all of the parameters from ParallelRegion() and Sections(). This is simply a convenience method for creating a parallel region and a sections region inside of it. More...
 
static int Critical (int id, Action action)
 Creates a critical region. A critical region is a region of code that can only be executed by one thread at a time. If a thread encounters a critical region while another thread is inside a critical region, it will wait until the other thread is finished. More...
 
static void Critical (Action action, [CallerFilePath] string path="", [CallerLineNumber] int line=0)
 Creates a critical region. A critical region is a region of code that can only be executed by one thread at a time. If a thread encounters a critical region while another thread is inside a critical region, it will wait until the other thread is finished. More...
 
static void Barrier ()
 Creates a barrier. All threads must reach the barrier before any thread can continue. This is useful for synchronization. Many functions inside the Parallel class act as implicit barriers. Also acts as a memory barrier. More...
 
static int GetNumProcs ()
 Gets the number of available processors on the host system. More...
 
static void Master (Action action)
 Creates a master region. The master region is a region of code that is only executed by the master thread. The master thread is the thread with a thread ID of 0. You can get the thread ID of the calling thread with GetThreadNum(). More...
 
static void Single (int id, Action action)
 Creates a single region. A single region is only executed once per Parallel.ParallelRegion. The first thread to encounter the single region marks the region as encountered, then executes it. More...
 
static void Single (Action action, [CallerFilePath] string path="", [CallerLineNumber] int line=0)
 Creates a single region. A single region is only executed once per Parallel.ParallelRegion. The first thread to encounter the single region marks the region as encountered, then executes it. More...
 
static void Ordered (int id, Action action)
 Creates an ordered region. An ordered region is a region of code that is executed in order inside of a For() or ForReduction<T>() loop. This also acts as an implicit Critical() region. More...
 
static void Ordered (Action action, [CallerFilePath] string path="", [CallerLineNumber] int line=0)
 Creates an ordered region. An ordered region is a region of code that is executed in order inside of a For() or ForReduction<T>() loop. This also acts as an implicit Critical() region. More...
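An ordered region is used inside a worksharing loop, as in this sketch (assumes the DotMP package is referenced; the computation is a placeholder):

```csharp
using System;
using DotMP;

class OrderedExample
{
    static void Main()
    {
        DotMP.Parallel.ParallelRegion(() =>
        {
            DotMP.Parallel.For(0, 8, i =>
            {
                int square = i * i;  // runs in parallel, in any order

                // Output happens in iteration order, one thread at a time.
                DotMP.Parallel.Ordered(() => Console.WriteLine($"{i}: {square}"));
            });
        });
    }
}
```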
 
static int GetNumThreads ()
 Gets the number of active threads. If not inside of a ParallelRegion(), returns 1. More...
 
static int GetThreadNum ()
 Gets the ID of the calling thread. More...
 
static void SetNumThreads (int num_threads)
 Sets the number of threads that will be used in the next parallel region. More...
 
static int GetMaxThreads ()
 Gets the maximum number of threads that will be used in the next parallel region. More...
 
static bool InParallel ()
 Gets whether or not the calling thread is in a parallel region. More...
 
static void SetDynamic ()
 Tells the runtime to dynamically adjust the number of threads. More...
 
static bool GetDynamic ()
 Gets whether or not the runtime is dynamically adjusting the number of threads. More...
 
static void SetNested (bool _)
 Enables nested parallelism. This function is not implemented, as nested parallelism does not exist in the current version of DotMP. There are no plans to implement nested parallelism at the moment. More...
 
static bool GetNested ()
 Gets whether or not nested parallelism is enabled. There are no plans to implement nested parallelism at the moment. More...
 
static double GetWTime ()
 Gets the wall time as a double, representing the number of seconds since the epoch. More...
 
static IScheduler GetSchedule ()
 Returns the current schedule being used in a For() or ForReduction<T>() loop. More...
 
static uint GetChunkSize ()
 Returns the current chunk size being used in a For() or ForReduction<T>() loop. More...
 

Static Package Attributes

static volatile bool canceled = false
 Determines if the current threadpool has been marked to terminate. More...
 

Static Private Member Functions

static void FixArgs (int start, int end, ref IScheduler sched, ref uint? chunk_size, uint num_threads)
 Fixes the arguments for a parallel for loop. If a Schedule is set to Static, Dynamic, or Guided, then the function simply calculates chunk size if none is given. If a Schedule is set to Runtime, then the function checks the OMP_SCHEDULE environment variable and sets the appropriate values. More...
 
static void ValidateParams (int start=0, int end=0, IScheduler schedule=null, uint? num_threads=null, uint? chunk_size=null, uint? num_tasks=null, uint? grainsize=null)
 Validates all parameters passed to DotMP functions. More...
 
static string FormatCaller (string filename, int linenum)
 Formats the caller information for determining uniqueness of a call. More...
 
static void For< T > (int start, int end, ForAction< T > forAction, IScheduler schedule=null, uint? chunk_size=null, Operations? op=null)
 Internal handler for For. More...
 
static void ForReduction< T > (int start, int end, Operations op, ref T reduce_to, ForAction< T > action, IScheduler schedule=null, uint? chunk_size=null)
 Internal handler for ForReduction. More...
 

Static Private Attributes

static volatile Dictionary< string, object > critical_lock = new Dictionary<string, object>()
 The dictionary for critical regions. More...
 
static volatile HashSet< string > single_thread = new HashSet<string>()
 The set of encountered single regions. More...
 
static volatile Dictionary< string, int > ordered = new Dictionary<string, int>()
 The dictionary for ordered regions. More...
 
static volatile Barrier barrier
 Barrier object for DotMP.Parallel.Barrier() More...
 
static volatile uint num_threads = 0
 Number of threads to be used in the next parallel region, where 0 means that it will be determined on-the-fly. More...
 
static ThreadLocal< int > thread_num = new ThreadLocal<int>(() => Convert.ToInt32(Thread.CurrentThread.Name))
 Current thread num, cached. More...
 
static ThreadLocal< uint > task_nesting = new ThreadLocal<uint>(() => 0)
 The level of task nesting, to determine when to enact barriers and reset the DAG. More...
 

Detailed Description

The main class of DotMP. Contains all the main methods for parallelism. For users, this is the main class you want to worry about, along with Lock, Shared, Atomic, and GPU.

Member Function Documentation

◆ Barrier()

static void DotMP.Parallel.Barrier ( )
inlinestatic

Creates a barrier. All threads must reach the barrier before any thread can continue. This is useful for synchronization. Many functions inside the Parallel class act as implicit barriers. Also acts as a memory barrier.

Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
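A sketch of an explicit barrier separating two phases of work (assumes the DotMP package is referenced; note that For() already ends in an implicit barrier, so the explicit call is shown purely to illustrate the API):

```csharp
using System;
using DotMP;

class BarrierExample
{
    static void Main()
    {
        int[] a = new int[1024];

        DotMP.Parallel.ParallelRegion(() =>
        {
            DotMP.Parallel.For(0, a.Length, i => a[i] = i);

            // Every thread must arrive here before any thread proceeds.
            DotMP.Parallel.Barrier();

            DotMP.Parallel.For(0, a.Length, i => a[i] *= 2);
        });
    }
}
```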

◆ Critical() [1/2]

static void DotMP.Parallel.Critical ( Action  action,
[CallerFilePath] string  path = "",
[CallerLineNumber] int  line = 0 
)
inlinestatic

Creates a critical region. A critical region is a region of code that can only be executed by one thread at a time. If a thread encounters a critical region while another thread is inside a critical region, it will wait until the other thread is finished.

Parameters
action: The action to be performed in the critical region.
line: The line number this method was called from.
path: The path to the file this method was called from.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
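A sketch of this overload protecting a non-thread-safe collection (assumes the DotMP package is referenced; uniqueness of the region is derived from the caller file and line, so no id is needed):

```csharp
using System;
using System.Collections.Generic;
using DotMP;

class CriticalExample
{
    static void Main()
    {
        var log = new List<int>();

        DotMP.Parallel.ParallelRegion(() =>
        {
            DotMP.Parallel.For(0, 100, i =>
            {
                // List<T> is not thread-safe; serialize access with Critical.
                DotMP.Parallel.Critical(() => log.Add(i));
            });
        });

        Console.WriteLine(log.Count); // 100
    }
}
```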

◆ Critical() [2/2]

static int DotMP.Parallel.Critical ( int  id,
Action  action 
)
inlinestatic

Creates a critical region. A critical region is a region of code that can only be executed by one thread at a time. If a thread encounters a critical region while another thread is inside a critical region, it will wait until the other thread is finished.

THIS METHOD IS NOW DEPRECATED.

Parameters
id: The ID of the critical region. Must be unique per region but consistent across all threads.
action: The action to be performed in the critical region.
Returns
The ID of the critical region.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.

◆ FixArgs()

static void DotMP.Parallel.FixArgs ( int  start,
int  end,
ref IScheduler  sched,
ref uint?  chunk_size,
uint  num_threads 
)
inlinestaticprivate

Fixes the arguments for a parallel for loop. If a Schedule is set to Static, Dynamic, or Guided, then the function simply calculates chunk size if none is given. If a Schedule is set to Runtime, then the function checks the OMP_SCHEDULE environment variable and sets the appropriate values.

Parameters
start: The start of the loop.
end: The end of the loop.
sched: The schedule of the loop.
chunk_size: The chunk size of the loop.
num_threads: The number of threads to be used in the loop.

◆ For()

static void DotMP.Parallel.For ( int  start,
int  end,
Action< int >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a for loop inside a parallel region. A for loop created with For inside of a parallel region is executed in parallel, with iterations being distributed among the threads, and potentially out-of-order. A schedule is provided to inform the runtime how to distribute iterations of the loop to threads. Available schedules are specified by the Schedule enum, and have detailed documentation in the Iter class. Acts as an implicit Barrier().

Parameters
start: The start of the loop, inclusive.
end: The end of the loop, exclusive.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
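The behavior described above can be sketched as follows (assumes the DotMP package is referenced; the array size and computation are illustrative):

```csharp
using System;
using DotMP;

class ForExample
{
    static void Main()
    {
        double[] x = new double[1 << 20];

        DotMP.Parallel.ParallelRegion(() =>
        {
            // Iterations [0, x.Length) are distributed among the team.
            DotMP.Parallel.For(0, x.Length, i =>
            {
                x[i] = Math.Sin(i * 0.001);
            });
            // Implicit barrier: every iteration is complete before any
            // thread executes code past the For().
        });
    }
}
```

When no enclosing region is needed, the ParallelFor() convenience method combines the region and the loop into one call.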

◆ For< T >()

static void DotMP.Parallel.For< T > ( int  start,
int  end,
ForAction< T >  forAction,
IScheduler  schedule = null,
uint?  chunk_size = null,
Operations?  op = null 
)
inlinestaticprivate

Internal handler for For.

Parameters
start: The start of the loop, inclusive.
end: The end of the loop, exclusive.
forAction: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
op: The operation to be performed in the case of reduction loops.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ForCollapse() [1/4]

static void DotMP.Parallel.ForCollapse ( (int, int)  firstRange,
(int, int)  secondRange,
Action< int, int >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a collapsed for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
TooManyIterationsException: Thrown if there are too many iterations to handle.
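A sketch of collapsing a pair of nested loops over a 2D grid (assumes the DotMP package is referenced; dimensions and the computed values are illustrative):

```csharp
using System;
using DotMP;

class CollapseExample
{
    static void Main()
    {
        int rows = 1000, cols = 1000;
        double[,] grid = new double[rows, cols];

        DotMP.Parallel.ParallelRegion(() =>
        {
            // The rows*cols iteration space is flattened and scheduled
            // as a single loop, rather than scheduling only the rows.
            DotMP.Parallel.ForCollapse((0, rows), (0, cols), (i, j) =>
            {
                grid[i, j] = i * 0.5 + j * 0.25;
            });
        });
    }
}
```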


◆ ForCollapse() [2/4]

static void DotMP.Parallel.ForCollapse ( (int, int)  firstRange,
(int, int)  secondRange,
(int, int)  thirdRange,
Action< int, int, int >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a collapsed for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
thirdRange: A tuple representing the start and end of the third for loop.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
TooManyIterationsException: Thrown if there are too many iterations to handle.

◆ ForCollapse() [3/4]

static void DotMP.Parallel.ForCollapse ( (int, int)  firstRange,
(int, int)  secondRange,
(int, int)  thirdRange,
(int, int)  fourthRange,
Action< int, int, int, int >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a collapsed for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
thirdRange: A tuple representing the start and end of the third for loop.
fourthRange: A tuple representing the start and end of the fourth for loop.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
TooManyIterationsException: Thrown if there are too many iterations to handle.

◆ ForCollapse() [4/4]

static void DotMP.Parallel.ForCollapse ( (int, int)[]  ranges,
Action< int[]>  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a collapsed for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the collapsed loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops.

Parameters
ranges: An array of tuples, each representing the start and end of one of the for loops.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
TooManyIterationsException: Thrown if there are too many iterations to handle.

◆ FormatCaller()

static string DotMP.Parallel.FormatCaller ( string  filename,
int  linenum 
)
inlinestaticprivate

Formats the caller information for determining uniqueness of a call.

Parameters
filename: The calling file.
linenum: The calling line number.
Returns
A formatted string representing "{filename}:{linenum}".

◆ ForReduction< T >() [1/2]

static void DotMP.Parallel.ForReduction< T > ( int  start,
int  end,
Operations  op,
ref T  reduce_to,
ActionRef< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a for loop inside a parallel region with a reduction. This is similar to For(), but the reduction allows multiple threads to reduce their work down to a single variable. Each thread gets a thread-local copy of the reduction variable, and the runtime performs a global reduction at the end of the loop. Since the global reduction only involves as many variables as there are threads, ForReduction<T> is much more efficient than a naive approach using the Lock or Atomic classes. Acts as an implicit Barrier().

Template Parameters
T: The type of the reduction.
Parameters
start: The start of the loop, inclusive.
end: The end of the loop, exclusive.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionExceptionThrown when not in a parallel region.
CannotPerformNestedWorksharingExceptionThrown when nested inside another worksharing region.
InvalidArgumentsExceptionThrown if any provided arguments are invalid.
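A minimal sketch of how this might be used, assuming a referenced DotMP package and that the Operations enum provides Add (consistent with the signatures documented above):

```csharp
using DotMP;

class SumExample
{
    static void Main()
    {
        int total = 0;

        DotMP.Parallel.ParallelRegion(() =>
        {
            // Each thread accumulates into its own thread-local copy of `total`;
            // the runtime combines the copies with Operations.Add after the loop.
            DotMP.Parallel.ForReduction(0, 1000, Operations.Add, ref total,
                (ref int local, int i) => local += i);
        });

        // Sum of 0..999 is 499500.
        System.Console.WriteLine(total);
    }
}
```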

◆ ForReduction< T >() [2/2]

static void DotMP.Parallel.ForReduction< T > ( int  start,
int  end,
Operations  op,
ref T  reduce_to,
ForAction< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestaticprivate

Internal handler for ForReduction.

Parameters
start: The start of the loop, inclusive.
end: The end of the loop, exclusive.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
op: The operation to be performed in the case of reduction loops.
reduce_to: The variable to reduce to.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ForReductionCollapse< T >() [1/4]

static void DotMP.Parallel.ForReductionCollapse< T > ( (int, int)  firstRange,
(int, int)  secondRange,
Operations  op,
ref T  reduce_to,
ActionRef2< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a collapsed reduction for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops.

Unlike Parallel.ForCollapse, this method permits a reduction parameter.

Template Parameters
T: The type of the reduction.
Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
TooManyIterationsException: Thrown if there are too many iterations to handle.
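As a hedged sketch (assuming a referenced DotMP package with Operations.Add, and the ActionRef2 delegate shape shown above), a reduction over a 2D grid might look like:

```csharp
using DotMP;

class CollapseSum
{
    static void Main()
    {
        double[,] grid = new double[64, 64];
        grid[3, 4] = 1.5;   // example data
        double sum = 0.0;

        DotMP.Parallel.ParallelRegion(() =>
        {
            // The 64x64 iteration space is flattened into 4096 iterations
            // and scheduled as one loop; each thread reduces locally.
            DotMP.Parallel.ForReductionCollapse((0, 64), (0, 64),
                Operations.Add, ref sum,
                (ref double local, int i, int j) => local += grid[i, j]);
        });
    }
}
```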

◆ ForReductionCollapse< T >() [2/4]

static void DotMP.Parallel.ForReductionCollapse< T > ( (int, int)  firstRange,
(int, int)  secondRange,
(int, int)  thirdRange,
Operations  op,
ref T  reduce_to,
ActionRef3< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a collapsed reduction for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops.

Unlike Parallel.ForCollapse, this method permits a reduction parameter.

Template Parameters
T: The type of the reduction.
Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
thirdRange: A tuple representing the start and end of the third for loop.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
TooManyIterationsException: Thrown if there are too many iterations to handle.

◆ ForReductionCollapse< T >() [3/4]

static void DotMP.Parallel.ForReductionCollapse< T > ( (int, int)  firstRange,
(int, int)  secondRange,
(int, int)  thirdRange,
(int, int)  fourthRange,
Operations  op,
ref T  reduce_to,
ActionRef4< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a collapsed reduction for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops.

Unlike Parallel.ForCollapse, this method permits a reduction parameter.

Template Parameters
T: The type of the reduction.
Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
thirdRange: A tuple representing the start and end of the third for loop.
fourthRange: A tuple representing the start and end of the fourth for loop.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
TooManyIterationsException: Thrown if there are too many iterations to handle.

◆ ForReductionCollapse< T >() [4/4]

static void DotMP.Parallel.ForReductionCollapse< T > ( (int, int)[]  ranges,
Operations  op,
ref T  reduce_to,
ActionRefN< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null 
)
inlinestatic

Creates a collapsed reduction for loop inside a parallel region. A collapsed for loop can be used when you want to parallelize two or more nested for loops. Instead of only parallelizing across the outermost loop, the nested loops are flattened before scheduling, which has the effect of parallelizing across all of the loops. This multiplies the number of iterations the scheduler can work with, which can improve load balancing in irregular nested loops.

Unlike Parallel.ForCollapse, this method permits a reduction parameter.

Template Parameters
T: The type of the reduction.
Parameters
ranges: An array of tuples, each representing the start and end of one of the for loops.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
TooManyIterationsException: Thrown if there are too many iterations to handle.

◆ GetChunkSize()

static uint DotMP.Parallel.GetChunkSize ( )
inlinestatic

Returns the current chunk size being used in a For() or ForReduction<T>() loop.

Returns
The chunk size being used in a For() or ForReduction<T>() loop. If 0, a For() or ForReduction<T>() has not been encountered yet.

◆ GetDynamic()

static bool DotMP.Parallel.GetDynamic ( )
inlinestatic

Gets whether or not the runtime is dynamically adjusting the number of threads.

Returns
Whether or not the runtime is dynamically adjusting the number of threads.

◆ GetMaxThreads()

static int DotMP.Parallel.GetMaxThreads ( )
inlinestatic

Gets the maximum number of threads that will be used in the next parallel region.

Returns
The maximum number of threads that will be used in the next parallel region.

◆ GetNested()

static bool DotMP.Parallel.GetNested ( )
static

Gets whether or not nested parallelism is enabled. There are no plans to implement nested parallelism at the moment.

Returns
Always returns false.

◆ GetNumProcs()

static int DotMP.Parallel.GetNumProcs ( )
inlinestatic

Gets the number of available processors on the host system.

Returns
The number of processors.

◆ GetNumThreads()

static int DotMP.Parallel.GetNumThreads ( )
inlinestatic

Gets the number of active threads. If not inside of a ParallelRegion(), returns 1.

Returns
The number of threads.

◆ GetSchedule()

static IScheduler DotMP.Parallel.GetSchedule ( )
inlinestatic

Returns the current schedule being used in a For() or ForReduction<T>() loop.

Returns
The schedule being used in the For() or ForReduction<T>() loop, or null if a For() or ForReduction<T>() has not been encountered yet.

◆ GetThreadNum()

static int DotMP.Parallel.GetThreadNum ( )
inlinestatic

Gets the ID of the calling thread.

Returns
The number of the calling thread.
Exceptions
NotInParallelRegionExceptionThrown when not in a parallel region.
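A minimal sketch combining GetThreadNum() with GetNumThreads(), assuming a referenced DotMP package:

```csharp
using DotMP;

class ThreadIds
{
    static void Main()
    {
        DotMP.Parallel.ParallelRegion(() =>
        {
            // Each thread reports its own ID; output order is nondeterministic.
            int tid = DotMP.Parallel.GetThreadNum();
            int count = DotMP.Parallel.GetNumThreads();
            System.Console.WriteLine($"hello from thread {tid} of {count}");
        });
    }
}
```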

◆ GetWTime()

static double DotMP.Parallel.GetWTime ( )
inlinestatic

Gets the wall time as a double, representing the number of seconds since the epoch.

Returns
The wall time as a double.

◆ InParallel()

static bool DotMP.Parallel.InParallel ( )
inlinestatic

Gets whether or not the calling thread is in a parallel region.

Returns
Whether or not the calling thread is in a parallel region.

◆ Master()

static void DotMP.Parallel.Master ( Action  action)
inlinestatic

Creates a master region. The master region is a region of code that is only executed by the master thread. The master thread is the thread with a thread ID of 0. You can get the thread ID of the calling thread with GetThreadNum().

Parameters
action: The action to be performed in the master region.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
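A sketch of a master region inside a parallel region, assuming a referenced DotMP package:

```csharp
using DotMP;

class MasterExample
{
    static void Main()
    {
        DotMP.Parallel.ParallelRegion(() =>
        {
            // Every thread executes this statement.
            System.Console.WriteLine("working");

            // Only thread 0 executes the master region. Note that the
            // documentation above lists no implicit barrier for Master().
            DotMP.Parallel.Master(() =>
                System.Console.WriteLine("report from the master thread"));
        });
    }
}
```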

◆ MasterTaskloop()

static void DotMP.Parallel.MasterTaskloop ( int  start,
int  end,
Action< int >  action,
uint?  grainsize = null,
uint?  num_tasks = null,
bool  only_if = true 
)
inlinestatic

Wrapper around Parallel.Master() and Parallel.Taskloop().

Parameters
start: The start of the taskloop, inclusive.
end: The end of the taskloop, exclusive.
action: The action to be executed as the body of the loop.
grainsize: The number of iterations to be completed per task.
num_tasks: The number of tasks to spawn to complete the loop.
only_if: Only generate tasks if true, otherwise execute loop sequentially.
Exceptions
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ Ordered() [1/2]

static void DotMP.Parallel.Ordered ( Action  action,
[CallerFilePath] string  path = "",
[CallerLineNumber] int  line = 0 
)
inlinestatic

Creates an ordered region. An ordered region is a region of code that is executed in order inside of a For() or ForReduction<T>() loop. This also acts as an implicit Critical() region.

Parameters
action: The action to be performed in the ordered region.
line: The line number this method was called from.
path: The path to the file this method was called from.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
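As a sketch (assuming a referenced DotMP package), an ordered region can serialize output from a parallel loop while the computation itself stays parallel:

```csharp
using DotMP;

class OrderedExample
{
    static void Main()
    {
        DotMP.Parallel.ParallelRegion(() =>
        {
            DotMP.Parallel.For(0, 8, i =>
            {
                int square = i * i;              // computed in parallel, any order
                DotMP.Parallel.Ordered(() =>     // printed in iteration order
                    System.Console.WriteLine($"{i}^2 = {square}"));
            });
        });
    }
}
```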

◆ Ordered() [2/2]

static void DotMP.Parallel.Ordered ( int  id,
Action  action 
)
inlinestatic

Creates an ordered region. An ordered region is a region of code that is executed in order inside of a For() or ForReduction<T>() loop. This also acts as an implicit Critical() region.

THIS METHOD IS NOW DEPRECATED.

Parameters
id: The ID of the ordered region. Must be unique per region but consistent across all threads.
action: The action to be performed in the ordered region.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.

◆ ParallelFor()

static void DotMP.Parallel.ParallelFor ( int  start,
int  end,
Action< int >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel for loop. Contains all of the parameters from ParallelRegion() and For(). This is simply a convenience method for creating a parallel region and a for loop inside of it.

Parameters
start: The start of the loop, inclusive.
end: The end of the loop, exclusive.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
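A minimal sketch of the convenience form, assuming a referenced DotMP package:

```csharp
using DotMP;

class VectorAdd
{
    static void Main()
    {
        const int n = 1_000_000;
        double[] a = new double[n], b = new double[n], c = new double[n];

        // One call opens a parallel region and runs the for loop inside it,
        // equivalent to ParallelRegion() wrapping For().
        DotMP.Parallel.ParallelFor(0, n, i => c[i] = a[i] + b[i]);
    }
}
```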

◆ ParallelForCollapse() [1/4]

static void DotMP.Parallel.ParallelForCollapse ( (int, int)  firstRange,
(int, int)  secondRange,
Action< int, int >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel collapsed for loop. Contains all of the parameters from ParallelRegion() and ForCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelForCollapse() [2/4]

static void DotMP.Parallel.ParallelForCollapse ( (int, int)  firstRange,
(int, int)  secondRange,
(int, int)  thirdRange,
Action< int, int, int >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel collapsed for loop. Contains all of the parameters from ParallelRegion() and ForCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
thirdRange: A tuple representing the start and end of the third for loop.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelForCollapse() [3/4]

static void DotMP.Parallel.ParallelForCollapse ( (int, int)  firstRange,
(int, int)  secondRange,
(int, int)  thirdRange,
(int, int)  fourthRange,
Action< int, int, int, int >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel collapsed for loop. Contains all of the parameters from ParallelRegion() and ForCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
thirdRange: A tuple representing the start and end of the third for loop.
fourthRange: A tuple representing the start and end of the fourth for loop.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelForCollapse() [4/4]

static void DotMP.Parallel.ParallelForCollapse ( (int, int)[]  ranges,
Action< int[]>  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel collapsed for loop. Contains all of the parameters from ParallelRegion() and ForCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop.

Parameters
ranges: An array of tuples, each representing the start and end of one of the for loops.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelForReduction< T >()

static void DotMP.Parallel.ParallelForReduction< T > ( int  start,
int  end,
Operations  op,
ref T  reduce_to,
ActionRef< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel for loop with a reduction. Contains all of the parameters from ParallelRegion() and ForReduction<T>(). This is simply a convenience method for creating a parallel region and a for loop with a reduction inside of it.

Template Parameters
T: The type of the reduction.
Parameters
start: The start of the loop, inclusive.
end: The end of the loop, exclusive.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
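A sketch of the convenience form as a dot product, assuming a referenced DotMP package with Operations.Add:

```csharp
using DotMP;

class DotProduct
{
    static void Main()
    {
        const int n = 100_000;
        double[] x = new double[n], y = new double[n];
        double dot = 0.0;

        // Region, loop, and reduction in one call, equivalent to
        // ParallelRegion() wrapping ForReduction<T>().
        DotMP.Parallel.ParallelForReduction(0, n, Operations.Add, ref dot,
            (ref double local, int i) => local += x[i] * y[i]);

        System.Console.WriteLine(dot);
    }
}
```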

◆ ParallelForReductionCollapse< T >() [1/4]

static void DotMP.Parallel.ParallelForReductionCollapse< T > ( (int, int)  firstRange,
(int, int)  secondRange,
Operations  op,
ref T  reduce_to,
ActionRef2< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel collapsed reduction for loop. Contains all of the parameters from ParallelRegion() and ForReductionCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop with a reduction inside of it.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelForReductionCollapse< T >() [2/4]

static void DotMP.Parallel.ParallelForReductionCollapse< T > ( (int, int)  firstRange,
(int, int)  secondRange,
(int, int)  thirdRange,
Operations  op,
ref T  reduce_to,
ActionRef3< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel collapsed reduction for loop. Contains all of the parameters from ParallelRegion() and ForReductionCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop with a reduction inside of it.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
thirdRange: A tuple representing the start and end of the third for loop.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelForReductionCollapse< T >() [3/4]

static void DotMP.Parallel.ParallelForReductionCollapse< T > ( (int, int)  firstRange,
(int, int)  secondRange,
(int, int)  thirdRange,
(int, int)  fourthRange,
Operations  op,
ref T  reduce_to,
ActionRef4< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel collapsed reduction for loop. Contains all of the parameters from ParallelRegion() and ForReductionCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop with a reduction inside of it.

Parameters
firstRange: A tuple representing the start and end of the first for loop.
secondRange: A tuple representing the start and end of the second for loop.
thirdRange: A tuple representing the start and end of the third for loop.
fourthRange: A tuple representing the start and end of the fourth for loop.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelForReductionCollapse< T >() [4/4]

static void DotMP.Parallel.ParallelForReductionCollapse< T > ( (int, int)[]  ranges,
Operations  op,
ref T  reduce_to,
ActionRefN< T >  action,
IScheduler  schedule = null,
uint?  chunk_size = null,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel collapsed reduction for loop. Contains all of the parameters from ParallelRegion() and ForReductionCollapse(). This is simply a convenience method for creating a parallel region and a collapsed for loop with a reduction inside of it.

Parameters
ranges: An array of tuples, each representing the start and end of one of the for loops.
op: The operation to be performed on the reduction.
reduce_to: The variable to reduce to.
action: The action to be performed in the loop.
schedule: The schedule of the loop, defaulting to static.
chunk_size: The chunk size of the loop, defaulting to null. If null, will be calculated on-the-fly.
num_threads: The number of threads to be used in the loop, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelMaster()

static void DotMP.Parallel.ParallelMaster ( Action  action,
uint?  num_threads = null 
)
inlinestatic

Wrapper around Parallel.ParallelRegion() and Parallel.Master().

Parameters
action: The action to be performed in the parallel region.
num_threads: The number of threads to be used in the parallel region, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelMasterTaskloop()

static void DotMP.Parallel.ParallelMasterTaskloop ( int  start,
int  end,
Action< int >  action,
uint?  grainsize = null,
uint?  num_tasks = null,
uint?  num_threads = null,
bool  only_if = true 
)
inlinestatic

Wrapper around Parallel.ParallelRegion(), Parallel.Master(), and Parallel.Taskloop().

Parameters
start: The start of the taskloop, inclusive.
end: The end of the taskloop, exclusive.
action: The action to be executed as the body of the loop.
grainsize: The number of iterations to be completed per task.
num_tasks: The number of tasks to spawn to complete the loop.
num_threads: The number of threads to be used in the parallel region, defaulting to null. If null, will be calculated on-the-fly.
only_if: Only generate tasks if true, otherwise execute loop sequentially.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.

◆ ParallelRegion()

static void DotMP.Parallel.ParallelRegion ( Action  action,
uint?  num_threads = null 
)
inlinestatic

Creates a parallel region. The body of a parallel region is executed by as many threads as specified by the num_threads parameter. If the num_threads parameter is absent, then the runtime checks if SetNumThreads has been called. If so, it will use that many threads. If not, the runtime will try to use as many threads as there are logical processors.

Parameters
action: The action to be performed in the parallel region.
num_threads: The number of threads to be used in the parallel region, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
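A minimal sketch of a parallel region with an explicit thread count, assuming a referenced DotMP package (the named arguments match the signature above):

```csharp
using DotMP;

class RegionExample
{
    static void Main()
    {
        // Request four threads explicitly; with num_threads omitted, the
        // runtime falls back to SetNumThreads() or the logical processor count.
        DotMP.Parallel.ParallelRegion(num_threads: 4, action: () =>
        {
            System.Console.WriteLine($"thread {DotMP.Parallel.GetThreadNum()}");
        });
    }
}
```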

◆ ParallelSections()

static void DotMP.Parallel.ParallelSections ( uint?  num_threads = null,
params Action[]  actions 
)
inlinestatic

Creates a parallel sections region. Contains all of the parameters from ParallelRegion() and Sections(). This is simply a convenience method for creating a parallel region and a sections region inside of it.

Parameters
actions: The actions to be performed in the parallel sections region.
num_threads: The number of threads to be used in the parallel sections region, defaulting to null. If null, will be calculated on-the-fly.
Exceptions
CannotPerformNestedParallelismException: Thrown if ParallelRegion is called from within another ParallelRegion.
InvalidArgumentsException: Thrown if any provided arguments are invalid.
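A sketch of three independent sections run in parallel, assuming a referenced DotMP package; because num_threads precedes the params array in the signature above, it is passed as a named argument here:

```csharp
using DotMP;

class SectionsExample
{
    static void Main()
    {
        // Each action runs exactly once, on whichever thread dequeues it.
        DotMP.Parallel.ParallelSections(num_threads: 3,
            () => System.Console.WriteLine("section A"),
            () => System.Console.WriteLine("section B"),
            () => System.Console.WriteLine("section C"));
    }
}
```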

◆ Sections()

static void DotMP.Parallel.Sections ( params Action[]  actions)
inlinestatic

Creates a sections region. Sections allows for the user to submit multiple, individual tasks to be distributed among threads in parallel. In parallel, each thread active will dequeue a callback and execute it. This is useful if you have lots of individual tasks that need to be executed in parallel, and each task requires its own lambda. Acts as an implicit Barrier().

Parameters
actions: The actions to be performed in the sections region.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.

◆ SetDynamic()

static void DotMP.Parallel.SetDynamic ( )
inlinestatic

Tells the runtime to dynamically adjust the number of threads.

◆ SetNested()

static void DotMP.Parallel.SetNested ( bool  _)
inlinestatic

Enables nested parallelism. This function is not implemented, as nested parallelism does not exist in the current version of DotMP. There are no plans to implement nested parallelism at the moment.

Parameters
_: Unused.
Exceptions
NotImplementedException: Is always thrown.

◆ SetNumThreads()

static void DotMP.Parallel.SetNumThreads ( int  num_threads)
inlinestatic

Sets the number of threads that will be used in the next parallel region.

Parameters
num_threads: The number of threads to be used in the next parallel region.

◆ Single() [1/2]

static void DotMP.Parallel.Single ( Action  action,
[CallerFilePath] string  path = "",
[CallerLineNumber] int  line = 0 
)
inlinestatic

Creates a single region. A single region is only executed once per Parallel.ParallelRegion. The first thread to encounter the single region marks the region as encountered, then executes it.

Parameters
action: The action to be performed in the single region.
line: The line number this method was called from.
path: The path to the file this method was called from.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.

◆ Single() [2/2]

static void DotMP.Parallel.Single ( int  id,
Action  action 
)
inlinestatic

Creates a single region. A single region is only executed once per Parallel.ParallelRegion. The first thread to encounter the single region marks the region as encountered, then executes it.

THIS METHOD IS NOW DEPRECATED.

Parameters
id: The ID of the single region. Must be unique per region but consistent across all threads.
action: The action to be performed in the single region.
Exceptions
NotInParallelRegionException: Thrown when not in a parallel region.
CannotPerformNestedWorksharingException: Thrown when nested inside another worksharing region.

◆ Task()

static TaskUUID DotMP.Parallel.Task ( Action  action,
params TaskUUID[]  depends 
)
inlinestatic

Enqueue a task into the task queue. Differing from OpenMP, there is no concept of parent or child tasks as of yet. All tasks submitted are treated equally in a central task queue.

Parameters
action: The task to enqueue.
depends: List of dependencies for the task.
Returns
The task generated for use as a future dependency.
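A sketch of a dependency chain built from the returned TaskUUID handles, assuming a referenced DotMP package:

```csharp
using DotMP;

class TaskChain
{
    static void Main()
    {
        DotMP.Parallel.ParallelRegion(() =>
        {
            DotMP.Parallel.Master(() =>
            {
                // A three-stage pipeline expressed through task dependencies.
                TaskUUID load  = DotMP.Parallel.Task(() => System.Console.WriteLine("load"));
                TaskUUID parse = DotMP.Parallel.Task(() => System.Console.WriteLine("parse"), load);
                DotMP.Parallel.Task(() => System.Console.WriteLine("index"), parse);
            });

            // Parameter-less taskwait: all threads drain the queue (implicit barrier).
            DotMP.Parallel.Taskwait();
        });
    }
}
```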

◆ Taskloop()

static TaskUUID [] DotMP.Parallel.Taskloop ( int  start,
int  end,
Action< int >  action,
uint?  grainsize = null,
uint?  num_tasks = null,
bool  only_if = true,
params TaskUUID[]  depends 
)
inlinestatic

Creates a number of tasks to complete a for loop in parallel. If neither grainsize nor num_tasks are specified, a grainsize is calculated on-the-fly. If both grainsize and num_tasks are specified, the num_tasks parameter takes precedence over grainsize.

Parameters
start: The start of the taskloop, inclusive.
end: The end of the taskloop, exclusive.
action: The action to be executed as the body of the loop.
grainsize: The number of iterations to be completed per task.
num_tasks: The number of tasks to spawn to complete the loop.
only_if: Only generate tasks if true, otherwise execute loop sequentially.
depends: List of task dependencies for taskloop.
Returns
List of tasks generated by taskloop for use as future dependencies.
Exceptions
InvalidArgumentsException: Thrown if any provided arguments are invalid.
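A sketch of a taskloop generated by the master thread and drained by all threads, assuming a referenced DotMP package:

```csharp
using DotMP;

class TaskloopExample
{
    static void Main()
    {
        DotMP.Parallel.ParallelRegion(() =>
        {
            DotMP.Parallel.Master(() =>
            {
                // Split 10,000 iterations into tasks of roughly 256 iterations each.
                DotMP.Parallel.Taskloop(0, 10_000,
                    i => { /* body of iteration i */ },
                    grainsize: 256);
            });

            DotMP.Parallel.Taskwait();
        });
    }
}
```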

◆ Taskwait()

static void DotMP.Parallel.Taskwait ( params TaskUUID[]  tasks)
inlinestatic

Wait for selected tasks in the queue to complete, or for the full queue to empty if no tasks are specified. Acts as an implicit Barrier() if it is not called from within a task.

Parameters
tasks: The tasks to wait on.
Exceptions
ImproperTaskwaitUsageException: Thrown if a parameter-less Taskwait() is called from within a task, which would lead to deadlock.
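A sketch contrasting the selective and parameter-less forms (assumes the surrounding Parallel.ParallelRegion and Parallel.Master; waiting on specific TaskUUIDs does not drain the rest of the queue):

```csharp
using System;
using DotMP;

class TaskwaitExample
{
    static void Main()
    {
        Parallel.ParallelRegion(() =>
        {
            Parallel.Master(() =>
            {
                TaskUUID a = Parallel.Task(() => Console.WriteLine("a"));
                TaskUUID b = Parallel.Task(() => Console.WriteLine("b"));

                // Selective wait: blocks only until 'a' and 'b' complete;
                // other queued tasks may still be pending.
                Parallel.Taskwait(a, b);
            });

            // Parameter-less form: drains the queue and acts as an
            // implicit Barrier(). Must not be called from within a task.
            Parallel.Taskwait();
        });
    }
}
```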


◆ ValidateParams()

static void DotMP.Parallel.ValidateParams (int start = 0, int end = 0, IScheduler schedule = null, uint? num_threads = null, uint? chunk_size = null, uint? num_tasks = null, uint? grainsize = null)
inlinestaticprivate

Validates all parameters passed to DotMP functions.

Parameters
start: Start of loop.
end: End of loop.
schedule: Scheduler used.
num_threads: Number of threads.
chunk_size: Chunk size.
num_tasks: Number of tasks.
grainsize: Grainsize.
Exceptions
InvalidArgumentsException: Thrown if any provided arguments are invalid.

Member Data Documentation

◆ barrier

volatile Barrier DotMP.Parallel.barrier
staticprivate

Barrier object for DotMP.Parallel.Barrier().

◆ canceled

volatile bool DotMP.Parallel.canceled = false
staticpackage

Determines if the current threadpool has been marked to terminate.

◆ critical_lock

volatile Dictionary<string, object> DotMP.Parallel.critical_lock = new Dictionary<string, object>()
staticprivate

The dictionary for critical regions.

◆ num_threads

volatile uint DotMP.Parallel.num_threads = 0
staticprivate

Number of threads to be used in the next parallel region, where 0 means that it will be determined on-the-fly.

◆ ordered

volatile Dictionary<string, int> DotMP.Parallel.ordered = new Dictionary<string, int>()
staticprivate

The dictionary for ordered regions.

◆ single_thread

volatile HashSet<string> DotMP.Parallel.single_thread = new HashSet<string>()
staticprivate

The set of encountered single regions.

◆ task_nesting

ThreadLocal<uint> DotMP.Parallel.task_nesting = new ThreadLocal<uint>(() => 0)
staticprivate

The level of task nesting, to determine when to enact barriers and reset the DAG.

◆ thread_num

ThreadLocal<int> DotMP.Parallel.thread_num = new ThreadLocal<int>(() => Convert.ToInt32(Thread.CurrentThread.Name))
staticprivate

Current thread num, cached.


The documentation for this class was generated from the following file: