# P³LS³
**P**redictable **P**arallel **P**atterns **L**ibrary for **S**calable **S**mart **S**ystems

[![pipeline status](http://lab.las3.de/gitlab/las3/development/scheduling/predictable_parallel_patterns/badges/master/pipeline.svg)](http://lab.las3.de/gitlab/las3/development/scheduling/predictable_parallel_patterns/commits/master)

## Getting Started

This section gives a brief introduction on how to set up a minimal
project that uses the PLS library.

Further notes on [performance](PERFORMANCE.md) and general
[notes](NOTES.md) on the development progress can be found in
the linked documents.

### Installation

Clone the repository and open a terminal session in its folder.
Create a build folder using `mkdir cmake-build-release`
and switch into it with `cd cmake-build-release`.
Set up the CMake project using `cmake ../ -DCMAKE_BUILD_TYPE=RELEASE`,
then install it as a system-wide dependency using `sudo make install.pls`.
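
In short, from inside the cloned repository:

```bash
mkdir cmake-build-release
cd cmake-build-release
cmake ../ -DCMAKE_BUILD_TYPE=RELEASE
sudo make install.pls
```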

At this point the library is installed on your system.
To use it, simply add it to your existing CMake project with
`find_package(pls REQUIRED)` and then link it to your target
using `target_link_libraries(your_target pls::pls)`.
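
A minimal consumer `CMakeLists.txt` could then look roughly like this
(project and target names are placeholders, not part of the library):

```cmake
cmake_minimum_required(VERSION 3.10)
project(pls_example CXX)            # placeholder project name

# Locate the installed pls package.
find_package(pls REQUIRED)

add_executable(your_target main.cpp)
# Pull in the include paths and link flags exported by pls.
target_link_libraries(your_target pls::pls)
```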

### Basic Usage

```c++
#include <pls/pls.h>
#include <iostream>

long fib(long n);

int main() {
    // All memory needed by the scheduler can be allocated in advance either on stack or using malloc.
    const unsigned int num_threads = 8;
    const unsigned int memory_per_thread = 2 << 14;
    static pls::static_scheduler_memory<num_threads, memory_per_thread> memory;

    // Create the scheduler instance (starts a thread pool).
    pls::scheduler scheduler{&memory, num_threads};

    // Wake up the thread pool and perform work.
    scheduler.perform_work([&] {
        long result = fib(20);
        std::cout << "fib(20)=" << result << std::endl;
    });
    // At this point the thread pool sleeps.
    // This can for example be used for periodic work.
}

long fib(long n) {
    if (n == 0) {
        return 0;
    }
    if (n == 1) {
        return 1;
    }

    // Example for the high level API.
    // Will run both functions in parallel as separate tasks.
    long left, right;
    pls::invoke_parallel(
            [&] { left = fib(n - 1); },
            [&] { right = fib(n - 2); }
    );
    return left + right;
}

```

## Project Structure

The project uses [CMake](https://cmake.org/) as its build system;
the recommended IDE is either a simple text editor or [CLion](https://www.jetbrains.com/clion/).
We divide the project into subtargets to separate the library
itself, testing, and example code. The library itself can be found in
`lib/pls`, testing-related code is in `test`, and example and playground
apps are in `app`.

### Building

To build the project, first create a folder for the build
(typically as a subfolder of the project) using `mkdir cmake-build-debug`.
Change to the new folder with `cd cmake-build-debug` and initialize the CMake
project using `cmake ../ -DCMAKE_BUILD_TYPE=DEBUG`. For release builds
do the same, only with build type `RELEASE`. Other build-time settings
can also be passed at this setup step.

After this is done you can use the normal `make` commands:
`make` to build everything, `make <target>` to build a specific target,
or `make install` to install the library globally.
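
For example, a debug build from a clean checkout:

```bash
mkdir cmake-build-debug
cd cmake-build-debug
cmake ../ -DCMAKE_BUILD_TYPE=DEBUG
make                # build everything
sudo make install   # optionally install the library globally
```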

Available Settings (see the example invocation after this list):
- `-DEASY_PROFILER=ON/OFF`
    - default OFF
    - Enabling will link the easy profiler library and enable its macros
    - Enabling has a performance hit (do not use in releases)
- `-DADDRESS_SANITIZER=ON/OFF`
    - default OFF
    - Enables address sanitizer to be linked to the executable
    - Only one sanitizer can be active at once
    - Enabling has a performance hit (do not use in releases)
- `-DTHREAD_SANITIZER=ON/OFF`
    - default OFF
    - Enables the thread/data race sanitizer to be linked to the executable
    - Only one sanitizer can be active at once
    - Enabling has a performance hit (do not use in releases)
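
These options are passed at the CMake setup step together with the build type,
for example:

```bash
cmake ../ -DCMAKE_BUILD_TYPE=DEBUG -DTHREAD_SANITIZER=ON
```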

### Testing

Testing is done using [Catch2](https://github.com/catchorg/Catch2/)
in the test subfolder. Tests are built into a target called `tests`
and can be executed simply by building this executable and running it.
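
For example (the exact location of the test binary depends on where your
build tree places it):

```bash
cd cmake-build-debug
make tests
./test/tests   # assumed output path; adjust to your build tree
```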

### Data Race Detection

As this project contains a lot of concurrent code, we use
[Thread Sanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)
in our CI process and optionally in other builds. To set up CMake builds
with the sanitizer enabled, add the CMake option `-DTHREAD_SANITIZER=ON`.
Please test regularly with the thread sanitizer enabled and make sure not to
keep the repository in a state where the sanitizer reports errors.

Consider reading [the section on common data races](https://github.com/google/sanitizers/wiki/ThreadSanitizerPopularDataRaces)
to get an idea of what we try to avoid in our code.
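
For illustration only (this snippet is not from the project), the kind of
unsynchronized access the sanitizer reports looks like this:

```c++
#include <thread>

int counter = 0;  // shared, unsynchronized state

int main() {
    // Two threads write the same variable without any synchronization.
    std::thread a([] { for (int i = 0; i < 1000; i++) counter++; });
    std::thread b([] { for (int i = 0; i < 1000; i++) counter++; });
    a.join();
    b.join();
    // The final value is unpredictable; ThreadSanitizer flags the racy writes.
}
```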

### Profiling

To make profiling portable and to allow us to later analyze the logs
programmatically, we use [easy_profiler](https://github.com/yse/easy_profiler)
for capturing data. To enable profiling, install the library on your system
(ideally by building it and then running `make install`) and set the
CMake option `-DEASY_PROFILER=ON`.

After that, see the `invoke_parallel` example app for how to activate the
profiler. This will generate a trace file that can be viewed with
the `profiler_gui <output.prof>` command.
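
As a rough sketch of what that looks like (generic easy_profiler API usage,
not the project's actual wiring, which lives in the example app):

```c++
#include <easy/profiler.h>

int main() {
    EASY_PROFILER_ENABLE;  // start recording events at runtime

    EASY_BLOCK("expensive work", profiler::colors::Magenta);
    // ... run the code you want to measure ...
    EASY_END_BLOCK;

    // Write the captured events to a trace file for profiler_gui.
    profiler::dumpBlocksToFile("output.prof");
}
```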

Please note that the profiler adds overhead when looking at sub-millisecond
method invocations as we do, and it cannot replace a separate
profiler like `gperf` or `valgrind` for detailed analysis.
We still think it makes sense to add it as an optional feature,
as the customizable colors and fine-grained events (including the collection
of variables) can be used to visualize the 'big picture' of
program execution. Also, we hope to use it to log 'events' like
successful and failed steals in the future, as the general idea of logging
information per thread efficiently might be helpful for further
analysis.