
Tuesday, 19 April 2016

Getting COPRTHR and MPI running

In this post I get to grips with COPRTHR now that the new MPI library is installed, and solve the puzzle of why direct calls to the epiphany SDK did not work. There were some tricks to getting COPRTHR working, but some unexpected benefits as well.


Introduction


Working towards my goal of a general purpose infrastructure to wrap around my neural network simulator, I did some investigation into the Message Passing Interface (MPI) and direct memory access (DMA) memory transfers. The plan is to use MPI to provide the signalling between the host and the epiphany, and between epiphany cores. The DMA part comes in as a way to parallelise the processing: you transfer the next data set into local memory while you are still processing the current one. DMA is also a lot quicker than individual reads for larger data sets.

To get MPI working I decided to bite the bullet and get into COPRTHR. While this was a little confusing at first, once I'd gotten everything wired up properly it all made sense. If you understand stdcl the overall structure is the same and, so far, I have not found anything that you can do in stdcl that you cannot do in COPRTHR.

I also want to use MPI because it allows asynchronous data transfer between cores. While direct memory writes are fast, the core being written to plays no part in the process and thus other coordination methods are required to ensure that the data has arrived before it is used. It also requires that the core doing the writing writes to the correct location. This is fine if all cores have the same code base but if they do not there needs to be signaling between sender and receiver to ensure that the sender is not writing over the wrong data (or program code!). MPI provides the abstraction needed to ensure an orderly, arms-length transfer between cores.

There was an unexpected benefit in that COPRTHR has a call that allows you to allocate a chunk of memory from within your kernel. While you still have the option to dynamically change your program and recompile on the fly, as I described in a previous post, there is a call that gives you access to unused memory on the epiphany which, if used carefully, can store data for local processing.


Preliminaries


The programs developed for this blog post were done on the 2015.01.30 image using COPRTHR version 1.6.2. The 2016.03 image was released recently so I'll install that soon and update anything that fails (which I don't expect to happen). Please also be aware that version 2.0 of COPRTHR is in the works, so check your version before proceeding.

You can have a look at the example code or, to clone the source files, execute:

git clone https://github.com/nickoppen/coprthr_basic.git


Installing MPI


MPI has been around for a number of years and is a set of calls, implemented on different systems, that allow message passing between those systems. Brown Deer recently brought out a COPRTHR extension that implements those calls for message passing between epiphany cores. While it is an extension to version 1.6.2, I expect that COPRTHR version 2 will have MPI built in with greater functionality (although no details have been released as yet).

Get the COPRTHR mpi extension from here:


http://www.browndeertechnology.com/code/bdt-libcoprthr_mpi-preview.tgz

Just download it, un-compress it somewhere out of the way and run the install script as root: 

sudo ./install.sh


Using eSDK calls in an OpenCL program


You can actually use eSDK calls directly from your OpenCL kernel code. However, I found that there is an inconsistency between the layout of the eSDK and the hard wired compile scripts used by the clcc compiler.

To test whether this inconsistency exists on your system, fetch the latest version of the parallella-examples from github, which includes a couple of MPI examples:

cd
git fetch parallella-examples
cd parallella-examples/mpi-nbody
make

If this compiles without errors then you are fine.

If you get errors such as:

undefined reference to `e_dma_copy(void*, void*, unsigned long) and
undefined reference to `e_mutex_unlock(unsigned int, unsigned int, int*)

try the following:

ls /opt/adapteva/esdk/tools/e-gnu/epiphany-elf/sys-include

If this gives you "No such file or directory" run the following (note: this is a single command without any breaks):

sudo ln -s /opt/adapteva/esdk/tools/e-gnu/epiphany-elf/include /opt/adapteva/esdk/tools/e-gnu/epiphany-elf/sys-include

Then run:

make clean
make

again from the mpi-nbody directory to check if that fixed it. 

What if that does not work?


This was the problem with my installation. The way I discovered it (thanks to jar) was by using the cldebug tool:

cldebug -t ./temp -- clcc --coprthr-cc -mtarget=e32 -D__link_mpi__ --dump-bin -I/opt/adapteva/esdk/tools/e-gnu/epiphany-elf/include -I ./ -DECORE=16 -DNPARTICLE=2048 -DSTEPS=32 -DSIZE=1 -DMPI_BUF_SIZE=1024 -DCOPRTHR_MPI_COMPAT mpi_tfunc.c

This looks very complicated but it really just calls the clcc command on the kernel file. The output looks even more horrendous but it contains the output of all the steps that the Brown Deer pre-processor goes through to produce a working executable.

If there is still something missing in your compile, sift through all of this text and look for something that is wrong. Given that I was missing a reference, I focused on include files and directories which occur after -I and -i switches. I found that it was including sys-include which didn't exist on my system. Instead, the .h files (e_dma.h and e_mutex.h) were in a directory called include. To fix it with minimum disruption, I created a soft link (ln -s) of the existing include and called it sys-include. Thus sys-include was an alias for include and the compiler was happy.


Compiling


Below are the changes you need to make in Code::Blocks to get a program to compile using COPRTHR. If you have not used Code::Blocks I suggest that you have a read through my post about setting it up from scratch. In this post I'm only going to cover the changes that you need to make to the stdcl set up described there. If you are an emacs/make fan, I've included a Makefile along with the example code.


Compiler Options


Kernel Compilation

The first two changes are critical for correct compilation of your kernel code. Go to the Compiler Settings menu (Settings | Compiler) and select the compiler you have set up for compiling with clcc (mine is called BD OpenCL).

Under the Compiler Settings tab choose the Other options tab and type the switches shown below:
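These are essentially the kernel-compile options that appear in the cldebug command shown further down, so something along these lines (an assumption on my part - adjust to suit your own project):

--coprthr-cc -mtarget=e32 -D__link_mpi__ --dump-bin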


Then under the #defines tab define the following two definitions:
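Going by the same cldebug command, the two definitions should be something like:

MPI_BUF_SIZE=1024
COPRTHR_MPI_COMPAT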




Host Application Compilation

Go back to the gcc compiler settings and change the Linker settings tab to include the COPRTHR link libraries (the m library is libm, the standard C maths library; it is in all the examples so I included it for good measure):
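If in doubt about the library names, copy whatever the Makefile included with the example code links against; for a standard COPRTHR 1.6 install they should be something like:

coprthr
coprthrcc
m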




Then under Search directories make sure that the compiler can find your included header files:


And that the linker can find the COPRTHR libraries:
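For a default installation the directories to add should be along these lines (an assumption - adjust if COPRTHR is installed elsewhere on your system):

/usr/local/browndeer/include    (compiler search directory)
/usr/local/browndeer/lib        (linker search directory)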





Project Properties


As always in a new project, you need to define a build target but in this case the output attributes are not important:



This is the tricky bit. The compiler, with the switches defined above, generates a binary file in the Execution working directory (your project directory by default). This file has the rather verbose extension of bin.3.e32 tacked onto the end of your kernel file name. Forget the .o file generated by the compiler; it is the bin.3.e32 file that you need at execution time. If your host application is being generated and run from your bin/Debug or bin/Release directory, you must copy the bin.3.e32 file to the same place.

To do this use the Post-build steps with a copy command and while you are at it, make the file readable by everyone. (Note: my kernel file is called pfunc.c. Replace this with your kernel file name.)
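A post-build step along these lines should do the job (a sketch only; pfunc.c and the Debug target are the names used in this example, so substitute your own):

cp pfunc.cbin.3.e32 bin/Debug/pfunc.cbin.3.e32
chmod a+r bin/Debug/pfunc.cbin.3.e32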


Also, while you are in this box and after saving your new settings, pop up to the root project settings (coprthr_basic in this example) and switch off All Warnings (-Wall). It will just be annoying.

Then, because we are not interested in the .o or the linker output, switch off linking for the kernel file or files.




Using COPRTHR Host-side


The overall structure of a COPRTHR program is the same as an stdcl program as described in my previous post. The steps you need to go through are also the same but with different calls.


Loading and Compiling


Stdcl has the command clopen to open the epiphany and load pre-compiled and JIT-compiled kernels, and clsopen for a kernel stored as an internal string.

COPRTHR has two commands:

int coprthr_dopen( const char* path, int flags); to open the epiphany (which should be matched with a coprthr_dclose()) where path is the #defined value COPRTHR_DEVICE_E32 and flags is COPRTHR_O_THREAD. The integer returned is the device descriptor that is needed in a number of subsequent calls.

and at least one of the following:

coprthr_program_t coprthr_cc_read_bin( const char* path, int flags ); to open a pre-compiled object file (our bin.3.e32 file), where path is the file location of the binary and flags is zero. The return value is a handle to the compiled file.

or

coprthr_program_t coprthr_cc( const char* src, size_t len, const char* opt, char** log ); compiles a string (src) using the compile options (opt), returning a handle to the program. The log parameter is a pointer to the compiler output. Pass in a NULL pointer and a suitable amount of space will be allocated (which must then be freed).


e.g.

int dd = coprthr_dopen(COPRTHR_DEVICE_E32, COPRTHR_O_THREAD);
printf("dd=%d\n", dd);
if (dd < 0)
{
    printf("device open failed\n");
    exit(0);
}

coprthr_program_t prg;
prg = coprthr_cc_read_bin("./pfunc.cbin.3.e32", 0);
printf("prg=%p\n", prg);
if (!(prg))
{
    printf("file pfunc.cbin.3.e32 not found\n");
    coprthr_dclose(dd);
    exit(0);
}

There is no call to compile source code in a file so read it into a string and compile it from there.
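A minimal sketch of how that might look (the file name and the compile options here are placeholders only):

/// read the kernel source into a string
FILE* fp = fopen("./pfunc.c", "r");
if (!fp) { printf("kernel source not found\n"); exit(0); }
fseek(fp, 0, SEEK_END);
size_t len = ftell(fp);
fseek(fp, 0, SEEK_SET);
char* src = (char*)malloc(len + 1);
fread(src, 1, len, fp);
src[len] = '\0';
fclose(fp);

/// compile it, letting coprthr_cc allocate the log buffer (remember to free both)
char* log = NULL;
coprthr_program_t prg = coprthr_cc(src, len, "-mtarget=e32", &log);
if (log) { printf("%s\n", log); free(log); }
free(src);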

Allocating Shared Memory


Similar to the stdcl call clmalloc, COPRTHR has the call:

coprthr_mem_t coprthr_dmalloc(int dd, size_t size, int flags); where a memory space of size is allocated on the device (dd) returning a handle of type coprthr_mem_t. The argument flags is not used.

I call it a handle rather than a pointer because it is really not working memory. If you try and use it as working memory you will be bitterly disappointed. The only way to write to and read from it is with the calls described in the next section.

Shared memory can be resized using the coprthr_drealloc(dd, size, flags) call and should be freed using coprthr_dfree(int dd, coprthr_mem_t mem) when you are done.

e.g.

coprthr_mem_t p_data_mem = coprthr_dmalloc(dd, WIDTH*sizeof(float), 0);
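and, when you have finished with it:

coprthr_dfree(dd, p_data_mem);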

Writing to and Reading From Shared Memory


In stdcl you would declare a variable of type cl_T * where T is a compatible simple type (e.g. int or float), call clmalloc(), initialise it in your host application and then call clmsync to transfer the data to shared memory.

COPRTHR does things a little differently. The handle returned by coprthr_dmalloc refers to memory in shared space only. To initialise it you need to declare, allocate and write to memory in your host program and then call:


coprthr_event_t coprthr_dwrite(int dd, coprthr_mem_t mem, size_t offset, void* ptr, size_t len, int flags); where dd is your device descriptor, mem is your memory handle, offset is how far into the shared memory you want to start writing, ptr is a pointer to your host storage the contents of which you want written into shared memory, len is the length (in bytes) of the data you want written and flags is one of COPRTHR_E_NOWAIT or COPRTHR_E_WAIT. The return value is an event handle which can be used in the call coprthr_dwaitevent(dd, event).

The contents of your host memory will be written to shared memory. In a similar way, once your kernel has finished its computation and written its results back, you call:

coprthr_event_t coprthr_dread(int dd, coprthr_mem_t mem, size_t offset, void* ptr, size_t len, int flags); where the arguments and return values are the same except, of course, that the data gets read from shared memory into your host memory.

There is also coprthr_dcopy(...) which copies data from one device's shared memory into another, but that is not relevant on the Parallella given that there is only one device.

e.g.

float host_buffer[WIDTH];
for (i = 0; i < WIDTH; i++) host_buffer[i] = -1 * i;

/// write data to shared DRAM
coprthr_mem_t p_data_mem = coprthr_dmalloc(dd, WIDTH*sizeof(float), 0);
coprthr_dwrite(dd, p_data_mem, 0, host_buffer, WIDTH*sizeof(float), COPRTHR_E_WAIT);

followed later by:

coprthr_dread(dd, p_data_mem, 0, host_buffer, WIDTH*sizeof(float),COPRTHR_E_WAIT);

Retrieving a Kernel


To call a kernel you need to retrieve a link to it from the program file you just read in or compiled. Similar to the stdcl clsym call, COPRTHR has:

coprthr_sym_t coprthr_getsym( coprthr_program_t program, const char* symbol); where program is the returned program handle and symbol is the name of the kernel. The return value is the handle to the kernel.

e.g.

coprthr_sym_t thr = coprthr_getsym(prg, "p_func_thread_mpi");
if (thr == (coprthr_sym_t)0xffffffff)
{
    printf("kernel p_func_thread_mpi not found\n");
    coprthr_dclose(dd);
    exit(0);
}


Calling a Kernel


There are three ways to call a kernel or kernels. The MPI library uses the call:

coprthr_mpiexec(int dd, int thrCount, coprthr_sym_t thr, void * args, int size, int flags); where dd is the device descriptor, thrCount is the number of threads you want to launch, thr is the handle to the kernel you are calling, args is a type-less pointer to a structure containing the kernel's arguments (see next paragraph), size is the size (in bytes) of the structure you are passing and flags is (presumably, because I have not seen them used) one of COPRTHR_E_NOWAIT or COPRTHR_E_WAIT. I also presume that the call returns a variable of type coprthr_event_t.

To use the args variable in this call you need to typedef a structure that contains the variables you want to pass to your kernel. Do this in a separate include file and include it in both the host file and kernel file. Declare a variable of this type in your host program in host memory. The simple data types (int, float etc) can be written to the structure but the arrays need to be allocated in shared memory and initialised as described above. The structure is then passed to the kernel as a type-less pointer which can then be cast back to the structure type inside your kernel code.
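For the example below, the shared header would contain something along these lines (a sketch; the field names are taken from the example that follows):

typedef struct
{
    float n;       /// a simple value used by the kernel
    int width;     /// the number of floats in the shared buffer
    void * p_mem;  /// the shared memory buffer, set on the host with coprthr_memptr()
} my_args_t;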

e.g.

my_args_t args = {
    .n = 2.0,
    .width = WIDTH,
    .p_mem = coprthr_memptr(p_data_mem, 0),
};

coprthr_mpiexec(dd, ECORES, thr, &args, sizeof(args), 0);



The alternative ways are the COPRTHR calls that were available before the MPI library appeared. These are:

coprthr_event_t coprthr_dexec( int dd, coprthr_kernel_t krn, unsigned int nargs, void** args, unsigned int nthr, void* reserved, int flags ); where dd is the device descriptor, krn is the handle to the kernel, nargs is the number of elements in the args variable (see next paragraph), args is a pointer to an array of pointers to the values to be passed to the kernel, nthr is the number of times the kernel should be executed, reserved is not used and flags is one of COPRTHR_E_NOWAIT or COPRTHR_E_WAIT. The return value is an event that can be used in the call coprthr_dwaitevent(dd, event).

and:

coprthr_event_t coprthr_dnexec( int dd, unsigned int nkrn, coprthr_kernel_t v_krn[], unsigned int v_nargs[], void** v_args[], unsigned int v_nthr[], void* v_reserved[], int flags); which allows a number of different kernels to be executed at once. Each array is a collection of the corresponding arguments from the dexec call, one entry per kernel.

The big difference between mpiexec and dexec is how the arguments are passed. The dexec call only accepts arrays and those arrays have to be in shared memory. This means passing singleton data via a one-element array, which is a bit clumsy. But remember, if you use mpiexec to call the kernel you don't have to actually use any MPI.

e.g.

float n[] = { 2.0 };
int width[] = { WIDTH };
coprthr_mem_t p_arg_n = coprthr_dmalloc(dd, sizeof(float), 0);
coprthr_mem_t p_arg_width = coprthr_dmalloc(dd, sizeof(int), 0);
coprthr_dwrite(dd, p_arg_n, 0, &n, sizeof(float), COPRTHR_E_WAIT);
coprthr_dwrite(dd, p_arg_width, 0, &width, sizeof(int), COPRTHR_E_WAIT);
void * p_args[] = { &p_arg_n, &p_arg_width, &p_data_mem };

coprthr_dexec(dd, thr, 3, p_args, ECORES, 0, COPRTHR_E_WAIT);


Changes to Kernel Code


Finally, now that you have called your kernel, you need to retrieve your arguments. Remember that shared memory is slow, so if you want to use an argument more than once it is better to make a local copy.

For kernels called with coprthr_mpiexec you need to cast the single kernel argument to your defined structure:


__kernel void p_func_thread_mpi( void * p_args )
{
    my_args_t* pargs = (my_args_t*)p_args;
    ...
}


With coprthr_dexec you need to de-reference the singleton arguments:


__kernel void p_func_thread_std( float * p_arg_n, int * p_arg_width, void * p_mem )
{
    float n = p_arg_n[0];
    int cnt = p_arg_width[0] / ECORES;
    float * g_pmem = (float *)p_mem;  /// to be used in dma copy
    ...
}

Reserving Local Memory


In stdcl there is no way to dynamically allocate space other than with source code modification and dynamic compilation. COPRTHR has two calls that allow you to use free space in local memory:

void * coprthr_tls_sbrk(int size); which returns the contents of the system variable holding the address of the first byte of free memory and then moves that free-memory address size bytes further along

and:

coprthr_tls_brk(void * addr); sets the system variable containing the address of the first byte of free memory to be addr.

These calls should be used as follows (if you are into neatness):

/// store the starting position of free space
void * memfree = coprthr_tls_sbrk(0);

/// grab a chunk of free space for local storage
float * localSpace = (float*)coprthr_tls_sbrk(cnt * sizeof(float));

/// do something with the space


/// reset the free space pointer to where it was initially
coprthr_tls_brk(memfree);

WARNING: The value returned by coprthr_tls_sbrk is an actual memory address. Remember this is bare metal programming with no comfy operating system preventing you writing data where you shouldn't. 


Copying using DMA


Once you have a reference to shared memory (called global memory in OpenCL terms) and have allocated some local space, you need to copy the data into memory local to the core. The eSDK documentation shows the basic synchronous copy command:

int e_dma_copy(void *dst, void *src, size_t bytes); which copies bytes of data from src to dst returning 0 if successful. This is a synchronous copy using the E_DMA_1 channel.

In our example the src is the pointer to global memory and the dst is the memory address returned by coprthr_tls_sbrk. You'd better make sure that bytes in e_dma_copy is the same as size in coprthr_tls_sbrk.

e.g.

e_dma_copy(localSpace, g_pmem + (cnt * rank), cnt*sizeof(float));

Asynchronous use of the DMA channels requires the use of:


int e_dma_start(e_dma_desc_t *descriptor, e_dma_id_t chan);
int e_dma_busy(e_dma_id_t chan);
void e_dma_wait(e_dma_id_t chan);
void e_dma_set_desc(e_dma_id_t chan, unsigned config, e_dma_desc_t *next_desc, unsigned stride_i_src, unsigned stride_i_dst, unsigned count_i, unsigned count_o, unsigned stride_o_src, unsigned stride_o_dst, void *addr_src, void *addr_dst, e_dma_desc_t *descriptor);

Which I will cover in a future post.

Final Thoughts


This post is more a how-to post on using COPRTHR and a toe-in-the-water look at MPI and DMA copying. While I'm convinced that the combination will lead to better structured and faster kernels, I'm not sure by how much and at what cost in terms of the complexity of the program.

My next step will be to compare MPI with direct memory writes along the lines of my data passing experiment. Until then, please let me know if I've missed anything.

Saturday, 16 August 2014

What was that NDRange thing?

Digging a little deeper and pulling apart the OpenCL parts of the Mandelbrot Example

(This blog post refers to version 1.6.0 of the Brown Deer OpenCL libraries on the Parallella. If you are using newer versions there could be significant differences.)

I thought I'd stop and go back over the Mandelbrot example and make sure I understood how the thing worked. 

There were two things that seemed odd. Firstly, there was only one fork command (or rather a forka command) where I thought there would have been 16; and secondly, what was that clndrange_init1D command doing?

Turns out that the two are intimately linked. The ndRange controls the number of calls to your kernel and the fork kicks off the process. If you use forka you can pass in additional arguments. 

The nd part is short for n-Dimensional, so clndrange_init1D seems to suggest that the space created in the malloc statement is treated as a 1-dimensional space. Therefore, clndrange_init2D and clndrange_init3D would somehow create 2D and 3D spaces - or that is what I thought. It turns out that things are not exactly as clever as they might first appear.

Let's start from the beginning. OpenCL has the standard call:

clEnqueueNDRangeKernel(cl_command_queue queue,
                       cl_kernel kernel,
                       cl_uint work_dims,
                       const size_t *global_work_offset,
                       const size_t *global_work_size,
                       const size_t *local_work_size,
                       cl_uint num_events,
                       const cl_event *wait_list,
                       cl_event *event);


Where:
  • work_dims is the number of dimensions (1, 2 or 3)
  • global_work_offset is the global ID offsets in each dimension
  • global_work_size is the number of work-items in each dimension
  • local_work_size is the number of work-items in a work-group, in each dimension

The last three are pointers to arrays of integers with for example global_work_offset[0] referring to the 1st dimension, global_work_offset[1] referring to the 2nd dimension etc.
(Note: this is the "official" version of what these are supposed to do)


The Brown Deer stdcl library replaces this one call with a choice of:

clndrange_init1d(gtoff0, gtsz0, ltsz0) OR
clndrange_init2d(gtoff0, gtsz0, ltsz0, gtoff1, gtsz1, ltsz1) OR
clndrange_init3d(gtoff0, gtsz0, ltsz0, gtoff1, gtsz1, ltsz1, gtoff2, gtsz2, ltsz2)

followed by:

clfork(context, devnum, kernel, ndrange, flags ) OR, if you want to pass additional arguments:
clforka(context, devnum, kernel, ndrange, flags [, arg0, ..., argN ]) 

where 

gtoff0 is the global_work_offset for dimension zero
gtsz0 is the global_work_size for dimension zero and
ltsz0 is the local_work_size for dimension zero etc.

So clndrange_init?d is a convenient way of setting those variables, which are then passed into the fork command via the clndrange_t* variable.

So, what does it actually do?


clndrange_init1D


I started with a 1 dimensional experiment. After much digging and experimentation here's the DIRT!

global_work_offset does NOTHING. Seriously! Don't believe me? Check out the Khronos documentation here: http://www.khronos.org/registry/cl/sdk/1.0/docs/man/xhtml/clEnqueueNDRangeKernel.html
global_work_size is the number of times your kernel will be called (one call in opencl talk is one work-item) and
local_work_size is the number returned to you in your kernel when you call get_local_size(0)

That's it! No fancy partitioning of the data space. Nothing. It took me quite a while to realise that it was that simple. 
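To illustrate, a kernel launched this way only needs to ask where it is. A minimal sketch (the kernel and argument names are made up for illustration):

__kernel void k_example(__global float* data)
{
    int gid = get_global_id(0);   /// which work-item is this? 0 .. global_work_size-1
    int lsz = get_local_size(0);  /// simply echoes back the local_work_size you passed in
    data[gid] = (float)lsz;       /// each call works on its own slice of the data
}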

There are some quirks to this call. The combination of global_work_size and local_work_size is a bit sensitive. I have not tried every combination available but you can get a segmentation fault after changing either the global_work_size or the local_work_size. For a global_work_size of less than 16 it seems to have problems if it and the local_work_size are not equal. Greater than 16 you may get an error "Exceeded maximum thread block size" but, apart from printing the error, it sometimes seems to still work. It seems reliable if global_work_size is a multiple of 16. If things are not right you can encounter segmentation faults and invalid-instruction faults, or you may find that your kernel does not seem to be called at all. (I have used 16 here because I'm using an Epiphany-16; really I should be referring to the number of cores on your accelerator.)

Let's have a look at what the mandelbrot example did.

First, it allocated some space in the standard accelerator context stdacc with:

cl_uchar* pixels = (cl_uchar*) clmalloc(stdacc, width * height * 3, 0);

(The *3 is because it creates an RGB bit map which needs 3 bytes per pixel.)

After grabbing the compiled kernel from the context with clsym(...), it calls:

clndrange_t ndr = clndrange_init1d(0, height, 16);

so it wants its kernel to be called height times. The 16 is actually ignored because the kernel never calls get_local_size(0).

Finally it calls:

clforka(stdacc, 0, krn, &ndr, CL_EVENT_WAIT, iterations, width, startx, starty, dx_over_width, dy_over_height, pixels);

where it passes in the width of each line as an argument along with the data structure, pixels. The kernel then generates one horizontal line of the final picture using the call get_global_id(0) to determine which line it is on. (A slightly cleaner way would have been to pass width as the third argument in the clndrange_init1d() and then to call get_local_size(0)).

So much for clndrange_init1D. The mandelbrot set is a good example of a problem where the calculation of each point is independent of the next. If you have such a problem then this simple model should be sufficient.

Next I tried a 2 dimensional version.


clndrange_init2D


Let me first say that I found clndrange_init2D to be a little temperamental. It seems that some combinations of values result in the kernel not being called at all. What's more, once it stopped working, regressing to the previous state did not result in it working again. It was very frustrating. Therefore, everything written below must be read with the knowledge that I gave up without actually understanding what was going on.

The 2D call seems (see previous paragraph) to call the kernel gtsz0 multiplied by gtsz1 times. The local size values ltsz0 and ltsz1 are merely returned by get_local_size(0|1).

clndrange_init2d( gtoff0, gtsz0, ltsz0, gtoff1, gtsz1, ltsz1)

gtoff0: NULL! Nothing to do here
gtsz0: the number of calls for the first dimension
ltsz0: the value returned by calling get_local_size(0)
gtoff1: NULL, as you might expect
gtsz1: the number of calls for the second dimension (one for each gtsz0 call)
ltsz1: the value returned by calling get_local_size(1)

I didn't go into clndrange_init3D but I'd lay money on it working in the same way as clndrange_init2D.


nD Example 


I took the mandelbrot example and removed all the fancy mathematics. I allocated two chunks of global memory and wrote a 1D kernel and a 2D kernel that just initialised the space to a given value.

The critical bits are:

int bytesPerCore = 16;  // how many bytes we want each core to process
int workItems = 32;     // the total number of workItems (threads sent to the Epiphany)

wrkArea1D = (cl_uchar*) clmalloc(stdacc, workItems * bytesPerCore, 0);
wrkArea2D = (cl_uchar*) clmalloc(stdacc, workItems * bytesPerCore, 0);

clndrange_t ndr1D = clndrange_init1d(NULL, workItems, bytesPerCore); 
clndrange_t ndr2D = clndrange_init2d(NULL, workItems, bytesPerCore/4, NULL, workItems, bytesPerCore/4);

and then in a common function call:

clforka(stdacc, 0, krn, ndr, CL_EVENT_WAIT, 1, rows, cols, wrkArea);

where 
krn is either "k_init1D" or "k_init2D"
ndr is either ndr1D or ndr2D and
wrkArea is either wrkArea1D or wrkArea2D

The kernel k_init1D works in the same way as the mandelbrot example processing 16 bytes of data in each of the 32 calls using get_global_id(0) to figure out where it should be.
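A sketch of what the 1D kernel boils down to (the argument types and names are my guesses based on the clforka call above; the real code is in the github repository):

__kernel void k_init1D(uint val, int rows, int cols, __global uchar* wrkArea)
{
    int gid = get_global_id(0);           /// 0 .. workItems-1
    int chunk = get_local_size(0);        /// bytesPerCore, passed in as ltsz0
    __global uchar* mine = wrkArea + gid * chunk;
    int i;
    mine[0] = (uchar)gid;                 /// record which call processed this chunk
    mine[1] = (uchar)chunk;               /// and the local size
    for (i = 2; i < chunk; i++)
        mine[i] = (uchar)val;             /// fill the rest with the passed-in value
}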

The k_init2D kernel breaks the data set of the same size into 4x4 "tiles" so that adjacent data could be processed at the same time. I thought that I could cast the global data into a 2 dimensional array but that didn't work, so I had to do all of the offset arithmetic in code. While this is not difficult it did make the 2D kernel considerably longer and, given that speed is of the essence, I would suggest that the only reason to do this would be if the algorithm overall would work better in 2D than in 1D (or you dig unnecessary complexity).

In the resultant data sets I include the value of get_global_id(0) to show which call processed which chunk of data. The 1D data has the global id in the first column and the result of get_local_size(0) in the second column. The first five lines are:

0       16      1       1       1       1       1       1       1       1       1       1       1       1       1       1
1       16      1       1       1       1       1       1       1       1       1       1       1       1       1       1
2       16      1       1       1       1       1       1       1       1       1       1       1       1       1       1
3       16      1       1       1       1       1       1       1       1       1       1       1       1       1       1
4       16      1       1       1       1       1       1       1       1       1       1       1       1       1       1

The 2D data set includes the result of get_global_id(1), which shows it jumping around between 28 and 31; this is the value returned by get_global_id(1) as at the last call to the kernel. The top-left value is the value returned by get_global_id(0) and the next value is the value returned by get_global_id(1). The results from the first four tiles (0 to 3) are:

0       28     1       1       1       31     1       1       2       31     1       1       3       29     1       1
1       1       1       1       1       1       1       1       1       1       1       1       1       1       1       1
1       1       1       1       1       1       1       1       1       1       1       1       1       1       1       1
1       1       1       1       1       1       1       1       1       1       1       1       1       1       1       1

The 2D call was also a lot more sensitive than the 1D. As I mentioned above, some combinations of values in the 2D version work and some don't. I suspect that the total number of calls must be a multiple of the number of cores available. So, before writing a big kernel with the assumption of clndrange_init2D working, think about how you want to process your data and check that a simple version works first.

The nD example code


The code for the example is a little long to list here so grab it from github. From your Eclipse workspace directory execute:

git clone https://github.com/nickoppen/nD.git

load it into Eclipse and push, pull, tweak and generally rip it up till you are an ndrange EXPERT. And, if I've gotten anything wrong, please let me know.


Up Next


I don't know about you but I'm getting a bit tired of waiting around for Eclipse. AndyC posted a procedure on getting Code::Blocks working on the Parallella (http://forums.parallella.org/viewtopic.php?f=13&t=1658). 

I've installed Code::Blocks and it does perform better than Eclipse. Andy's procedure is designed for the standard SDK and I got into a mess when I tried to compile an OpenCL program on it so I'm going to have to do some tweaking. If I make some progress I'll write a post about it.