Package: libzenohc-dev
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 129
Depends: libzenohc (=0.10.0~rc)
Filename: ./0.10.0-rc/libzenohc-dev_0.10.0-rc_amd64.deb
Size: 26680
MD5sum: 665a5e72e8444c7db14311b326ba2715
SHA1: dc82f1533ed7d66dfd00e3f45f040751de772914
SHA256: 589b1cfb25df0a469af11e130257c82221e13c936fa9c1c21f8588a95cb2101d
SHA512: 737ce65fa1902d0ee1d6d9fd59ddd55626da346c50e7a600f7bece9a129298736a1ab9777d902951ac562423148277adfa06d5d470340be6a81c0dbf6bc5336b
Homepage: http://zenoh.io
Description: The Zenoh C API
.
[](https://github.com/eclipse-zenoh/zenoh-c/actions?query=workflow%3A%22CI%22)
[](https://zenoh-c.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounced _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# C API
.
This repository provides a C binding based on the main [Zenoh implementation
written in Rust](https://github.com/eclipse-zenoh/zenoh).
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort into
maintaining compatibility between the various git repositories in the Zenoh
project.
.
1. Make sure that [Rust](https://www.rust-lang.org) is available on your
platform.
Please check [here](https://www.rust-lang.org/tools/install) to learn how
to install it.
If you already have the Rust toolchain installed, make sure it is
up-to-date with:
```bash
$ rustup update
```
.
2. Clone the [source] with `git`:
.
```bash
git clone https://github.com/eclipse-zenoh/zenoh-c.git
```
.
[source]: https://github.com/eclipse-zenoh/zenoh-c
.
3. Build:
.
Good CMake practice is to perform the build outside of the source directory,
leaving the source tree untouched. The examples below demonstrate this mode of
building. On the other hand, VSCode by default creates a build directory named
'build' inside the source tree. In this case the build script slightly changes
its behavior. See more about it in the section 'VSCode'.
.
By default the build configuration is set to `Release`, so it's not necessary
to add the `-DCMAKE_BUILD_TYPE=Release` option at the configuration step. But
if your platform uses a multi-config generator by default (as is the case on
Windows), you may need to add the option `--config Release` at the build step.
See the CMake [build-configurations] documentation for more details. The option
`--config Release` is skipped in further examples for brevity. It's actually
only necessary for [Visual Studio generators]; for [Ninja Multi-Config] the
build script is able to select `Release` as the default configuration.
.
```bash
$ mkdir -p build && cd build
$ cmake ../zenoh-c
$ cmake --build . --config Release
```
.
The generator to use is selected with option `-G`. If Ninja is installed on
your system, adding `-GNinja` to `cmake` command can greatly speed up the build
time:
.
```bash
$ cmake ../zenoh-c -GNinja
$ cmake --build .
```
.
[build-configurations]:
https://cmake.org/cmake/help/latest/manual/cmake-buildsystem.7.html#build-configurations
[Visual Studio generators]:
https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html#id14
[Ninja]: https://cmake.org/cmake/help/latest/generator/Ninja.html
[Ninja Multi-Config]:
https://cmake.org/cmake/help/latest/generator/Ninja%20Multi-Config.html
.
4. Install:
.
To install the zenoh-c library into the system, just build the `install`
target. You need root privileges to do so, as the default install location is
`/usr/local`.
.
```bash
$ cmake --build . --target install
```
.
If you want to install the zenoh-c libraries locally, you can set the
installation directory with `CMAKE_INSTALL_PREFIX`:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
$ cmake --build . --target install
```
.
By default only the dynamic library is installed. Set the
`ZENOHC_INSTALL_STATIC_LIBRARY` variable to also install the static library:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
-DZENOHC_INSTALL_STATIC_LIBRARY=TRUE
$ cmake --build . --target install
```
.
The installation puts the header files in the `include` directory, the library
files in the `lib` directory, and the CMake package configuration files for
the package `zenohc` in the `lib/cmake` directory. The library can later be
loaded with the CMake command `find_package(zenohc)`.
Link to the target `zenohc::lib` for the dynamic library or `zenohc::static`
for the static one in your CMakeLists.txt configuration file.
.
For the `Debug` configuration, the library package `zenohc_debug` is installed
side-by-side with the release `zenohc` library. The suffix `d` is added to the
names of the library files (libzenohc**d**.so).
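.
For instance, a minimal consumer `CMakeLists.txt` (the project and file names
`myapp`/`main.c` are illustrative) could look like this:
.
```cmake
cmake_minimum_required(VERSION 3.16)
project(myapp C)
# Locate the installed zenohc package configuration (from lib/cmake).
find_package(zenohc REQUIRED)
add_executable(myapp main.c)
# Link the dynamic library; use zenohc::static for the static one.
target_link_libraries(myapp PRIVATE zenohc::lib)
```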
.
5. VSCode
.
When the zenoh-c project is opened in VSCode, the build directory is set to
`build` inside the source tree (this is the default behavior of Microsoft
[CMake Tools]). The project build script detects this situation. In this case
it places the build files in the `target` directory and the `Cargo.toml` file
(which is generated from `Cargo.toml.in`) in the root of the source tree, as
Rust developers are used to and as the Rust build tools expect by default.
This behavior can also be explicitly enabled by setting the
`ZENOHC_BUILD_IN_SOURCE_TREE` variable to `TRUE`.
.
[CMake Tools]:
https://marketplace.visualstudio.com/items?itemName=ms-vscode.cmake-tools
.
## Building the Examples
.
The examples can be built in two ways. One is to select `examples` as a build
target of the zenoh-c project (assuming here that the current directory is
side-by-side with the zenoh-c directory):
.
```bash
$ cmake ../zenoh-c
$ cmake --build . --target examples
```
.
You may also use `--target <example_name>` if you wish to build only a
specific example.
.
All build artifacts will be in the `target/release/examples` directory in
this case.
.
The second way is to directly build `examples` as a root project:
.
```bash
$ cmake ../zenoh-c/examples
$ cmake --build .
```
.
In this case the example executables will be built in the current directory.
.
As a root project the `examples` project links `zenoh-c` with CMake's
[add_subdirectory] command by default. There are also other ways to link
`zenoh-c` - with [find_package] or [FetchContent]:
.
[add_subdirectory]:
https://cmake.org/cmake/help/latest/command/add_subdirectory.html
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[FetchContent]: https://cmake.org/cmake/help/latest/module/FetchContent.html
.
Link with `zenoh-c` installed into default location in the system (with
[find_package]):
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
```
.
Link with `zenoh-c` installed in `~/.local` directory:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
-DCMAKE_INSTALL_PREFIX=~/.local
```
.
Download specific `zenoh-c` version from git with [FetchContent]:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=GIT_URL -DZENOHC_GIT_TAG=0.8.0-rc
```
.
See also the `configure_include_project` function in [helpers.cmake] for more
information.
.
[helpers.cmake]: cmake/helpers.cmake
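.
As a sketch, the [FetchContent] route could also be driven from a consumer
`CMakeLists.txt` (the `myapp` target and the tag are illustrative, and we
assume the fetched project exports the same `zenohc::lib` target):
.
```cmake
include(FetchContent)
# Fetch and build zenoh-c as part of the consumer project.
FetchContent_Declare(zenohc
  GIT_REPOSITORY https://github.com/eclipse-zenoh/zenoh-c.git
  GIT_TAG        0.10.0-rc)
FetchContent_MakeAvailable(zenohc)
add_executable(myapp main.c)
target_link_libraries(myapp PRIVATE zenohc::lib)
```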
.
## Running the Examples
.
### Basic Pub/Sub Example
```bash
$ ./target/release/examples/z_sub
```
.
```bash
$ ./target/release/examples/z_pub
```
.
### Queryable and Query Example
```bash
$ ./target/release/examples/z_queryable
```
.
```bash
$ ./target/release/examples/z_get
```
.
## Running the Throughput Examples
```bash
$ ./target/release/examples/z_sub_thr
```
.
```bash
$ ./target/release/examples/z_pub_thr
```
.
## API conventions
Many of the types exposed by the `zenoh-c` API are types for which destruction
is necessary. To help you spot these types, we named them with the convention
that the name of any destructible type must start with `z_owned`.
.
For maximum performance, we try to make as few copies as possible. Sometimes,
this implies moving data that you own. Any function that takes a
non-const pointer to a `z_owned` type will perform its destruction. To make
this pattern more obvious, we encourage you to use the `z_move` macro instead
of a simple `&` to create these pointers. Rest assured that all `z_owned` types
are double-free safe, and that you may check whether any `z_owned_X_t` typed
value is still valid by using `z_X_check(&val)`, or the `z_check(val)` macro if
you're using C11.
.
We hope this convention will help you streamline your memory-safe usage of
zenoh, as following it should make looking for leaks trivial: simply search for
paths where a value of a `z_owned` type hasn't been passed to a function using
`z_move`.
.
Functions that simply need to borrow your data will instead take values of the
associated `z_X_t` type. You may construct them using `z_X_loan(&val)` (or the
`z_loan(val)` generic macro with C11).
.
Note that some `z_X_t` typed values can be constructed without needing to
`z_borrow` their owned variants. This allows you to reduce the number of
copies performed in your program.
.
The examples have been written with C11 in mind, using the conventions we
encourage you to follow.
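.
To illustrate these conventions, here is a minimal sketch of a publisher using
the ownership pattern (based on the 0.10 API; the exact function signatures may
differ between releases, so treat this as a guide rather than a reference):
.
```c
#include <stdio.h>
#include <string.h>
#include "zenoh.h"
int main(void) {
    // z_owned_* values must eventually be destroyed; pass them with z_move.
    z_owned_config_t config = z_config_default();
    z_owned_session_t session = z_open(z_move(config)); // consumes config
    if (!z_check(session)) { // C11 generic validity check
        printf("Unable to open session!\n");
        return -1;
    }
    // Borrowing with z_loan: z_put does not take ownership of the session.
    const char *value = "Hello Zenoh!";
    z_put(z_loan(session), z_keyexpr("demo/example"), (const uint8_t *)value,
          strlen(value), NULL);
    z_close(z_move(session)); // consumes and destroys the session
    return 0;
}
```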
.
Finally, we strongly advise that you refrain from using structure fields that
start with `_`:
* We try to maintain a common API between `zenoh-c` and
[`zenoh-pico`](https://github.com/eclipse-zenoh/zenoh-pico), such that porting
code from one to the other is, ideally, trivial. However, some types must have
distinct representations in either library, meaning that using these
representations explicitly will get you in trouble when porting.
* We reserve the right to change the memory layout of any type which has
`_`-prefixed fields, so trying to use them might cause your code to break on
updates.
.
## Logging
By default, zenoh-c enables Zenoh's logging library upon using the `z_open` or
`z_scout` functions. This behaviour can be disabled by adding
`-DDISABLE_LOGGER_AUTOINIT:bool=true` to the `cmake` configuration command. The
logger may then be manually re-enabled with the `zc_init_logger` function.
.
## Cross-Compilation
The following options have been introduced to facilitate cross-compilation.
> :warning: **WARNING** :warning: : Additional effort may be necessary,
depending on your environment.
.
- `-DZENOHC_CARGO_CHANNEL=nightly|beta|stable`: selects a specific Rust
toolchain release. See
[`rust-channels`](https://rust-lang.github.io/rustup/concepts/channels.html).
- `-DZENOHC_CARGO_FLAGS`: passes additional optional flags to the compilation.
See [`cargo flags`](https://doc.rust-lang.org/cargo/commands/cargo-build.html).
- `-DZENOHC_CUSTOM_TARGET`: specifies a cross-compilation target. Rust
currently supports several Tier 1, Tier 2 and Tier 3
[`targets`](https://doc.rust-lang.org/nightly/rustc/platform-support.html), but
keep in mind that zenoh-c only supports the following targets:
`aarch64-unknown-linux-gnu`, `x86_64-unknown-linux-gnu`,
`arm-unknown-linux-gnueabi`.
.
Let's put it all together in an example, assuming you want to cross-compile
for `aarch64-unknown-linux-gnu`:
.
1. Install the required packages:
- `sudo apt install gcc-aarch64-linux-gnu`
2. *(Only if you use `nightly`)*
- `rustup component add rust-src --toolchain nightly`
3. Compile zenoh-c. Assume that it's in the 'zenoh-c' directory. Notice that
the build in this sample is performed outside of the source directory:
```bash
$ export RUSTFLAGS="-Clinker=aarch64-linux-gnu-gcc -Car=aarch64-linux-gnu-ar"
$ mkdir -p build && cd build
$ cmake ../zenoh-c -DZENOHC_CARGO_CHANNEL=nightly
-DZENOHC_CARGO_FLAGS="-Zbuild-std=std,panic_abort"
-DZENOHC_CUSTOM_TARGET="aarch64-unknown-linux-gnu"
-DCMAKE_INSTALL_PREFIX=../aarch64/stage
$ cmake --build . --target install
```
Additionally, you can use the `RUSTFLAGS` environment variable to guide the
compilation.
.
.
If all goes well, the build files will be located at
`/path/to/zenoh-c/target/aarch64-unknown-linux-gnu/release`
and the installed files at the `CMAKE_INSTALL_PREFIX` location
(`../aarch64/stage` in the example above).
.
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-c
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-c
Package: libzenohc-dev
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 129
Depends: libzenohc (=0.10.0~rc)
Filename: ./0.10.0-rc/libzenohc-dev_0.10.0-rc_arm64.deb
Size: 26700
MD5sum: bc50084b16feb7eca36ab138408406d2
SHA1: 5e350e1cb84e86eae8aadb70d23cd5abe8e0d3cf
SHA256: 6a8272d4b28ea7fe395d0919720fc76f7c4d066e539053afa91bdd5de137046e
SHA512: fc2b97472dd0bb7319a00a44a4a1eb181e3ec053cc5888934ab65b70830deb790d204607a58c98b3497d0d0348824f775e80be0388e11d072630cdebc979a437
Homepage: http://zenoh.io
Description: The Zenoh C API
.
[](https://github.com/eclipse-zenoh/zenoh-c/actions?query=workflow%3A%22CI%22)
[](https://zenoh-c.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounced _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# C API
.
This repository provides a C binding based on the main [Zenoh implementation
written in Rust](https://github.com/eclipse-zenoh/zenoh).
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort into
maintaining compatibility between the various git repositories in the Zenoh
project.
.
1. Make sure that [Rust](https://www.rust-lang.org) is available on your
platform.
Please check [here](https://www.rust-lang.org/tools/install) to learn how
to install it.
If you already have the Rust toolchain installed, make sure it is
up-to-date with:
```bash
$ rustup update
```
.
2. Clone the [source] with `git`:
.
```bash
git clone https://github.com/eclipse-zenoh/zenoh-c.git
```
.
[source]: https://github.com/eclipse-zenoh/zenoh-c
.
3. Build:
.
Good CMake practice is to perform the build outside of the source directory,
leaving the source tree untouched. The examples below demonstrate this mode of
building. On the other hand, VSCode by default creates a build directory named
'build' inside the source tree. In this case the build script slightly changes
its behavior. See more about it in the section 'VSCode'.
.
By default the build configuration is set to `Release`, so it's not necessary
to add the `-DCMAKE_BUILD_TYPE=Release` option at the configuration step. But
if your platform uses a multi-config generator by default (as is the case on
Windows), you may need to add the option `--config Release` at the build step.
See the CMake [build-configurations] documentation for more details. The option
`--config Release` is skipped in further examples for brevity. It's actually
only necessary for [Visual Studio generators]; for [Ninja Multi-Config] the
build script is able to select `Release` as the default configuration.
.
```bash
$ mkdir -p build && cd build
$ cmake ../zenoh-c
$ cmake --build . --config Release
```
.
The generator to use is selected with option `-G`. If Ninja is installed on
your system, adding `-GNinja` to `cmake` command can greatly speed up the build
time:
.
```bash
$ cmake ../zenoh-c -GNinja
$ cmake --build .
```
.
[build-configurations]:
https://cmake.org/cmake/help/latest/manual/cmake-buildsystem.7.html#build-configurations
[Visual Studio generators]:
https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html#id14
[Ninja]: https://cmake.org/cmake/help/latest/generator/Ninja.html
[Ninja Multi-Config]:
https://cmake.org/cmake/help/latest/generator/Ninja%20Multi-Config.html
.
4. Install:
.
To install the zenoh-c library into the system, just build the `install`
target. You need root privileges to do so, as the default install location is
`/usr/local`.
.
```bash
$ cmake --build . --target install
```
.
If you want to install the zenoh-c libraries locally, you can set the
installation directory with `CMAKE_INSTALL_PREFIX`:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
$ cmake --build . --target install
```
.
By default only the dynamic library is installed. Set the
`ZENOHC_INSTALL_STATIC_LIBRARY` variable to also install the static library:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
-DZENOHC_INSTALL_STATIC_LIBRARY=TRUE
$ cmake --build . --target install
```
.
The installation puts the header files in the `include` directory, the library
files in the `lib` directory, and the CMake package configuration files for
the package `zenohc` in the `lib/cmake` directory. The library can later be
loaded with the CMake command `find_package(zenohc)`.
Link to the target `zenohc::lib` for the dynamic library or `zenohc::static`
for the static one in your CMakeLists.txt configuration file.
.
For the `Debug` configuration, the library package `zenohc_debug` is installed
side-by-side with the release `zenohc` library. The suffix `d` is added to the
names of the library files (libzenohc**d**.so).
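.
For instance, a minimal consumer `CMakeLists.txt` (the project and file names
`myapp`/`main.c` are illustrative) could look like this:
.
```cmake
cmake_minimum_required(VERSION 3.16)
project(myapp C)
# Locate the installed zenohc package configuration (from lib/cmake).
find_package(zenohc REQUIRED)
add_executable(myapp main.c)
# Link the dynamic library; use zenohc::static for the static one.
target_link_libraries(myapp PRIVATE zenohc::lib)
```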
.
5. VSCode
.
When the zenoh-c project is opened in VSCode, the build directory is set to
`build` inside the source tree (this is the default behavior of Microsoft
[CMake Tools]). The project build script detects this situation. In this case
it places the build files in the `target` directory and the `Cargo.toml` file
(which is generated from `Cargo.toml.in`) in the root of the source tree, as
Rust developers are used to and as the Rust build tools expect by default.
This behavior can also be explicitly enabled by setting the
`ZENOHC_BUILD_IN_SOURCE_TREE` variable to `TRUE`.
.
[CMake Tools]:
https://marketplace.visualstudio.com/items?itemName=ms-vscode.cmake-tools
.
## Building the Examples
.
The examples can be built in two ways. One is to select `examples` as a build
target of the zenoh-c project (assuming here that the current directory is
side-by-side with the zenoh-c directory):
.
```bash
$ cmake ../zenoh-c
$ cmake --build . --target examples
```
.
You may also use `--target <example_name>` if you wish to build only a
specific example.
.
All build artifacts will be in the `target/release/examples` directory in
this case.
.
The second way is to directly build `examples` as a root project:
.
```bash
$ cmake ../zenoh-c/examples
$ cmake --build .
```
.
In this case the example executables will be built in the current directory.
.
As a root project the `examples` project links `zenoh-c` with CMake's
[add_subdirectory] command by default. There are also other ways to link
`zenoh-c` - with [find_package] or [FetchContent]:
.
[add_subdirectory]:
https://cmake.org/cmake/help/latest/command/add_subdirectory.html
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[FetchContent]: https://cmake.org/cmake/help/latest/module/FetchContent.html
.
Link with `zenoh-c` installed into default location in the system (with
[find_package]):
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
```
.
Link with `zenoh-c` installed in `~/.local` directory:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
-DCMAKE_INSTALL_PREFIX=~/.local
```
.
Download specific `zenoh-c` version from git with [FetchContent]:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=GIT_URL -DZENOHC_GIT_TAG=0.8.0-rc
```
.
See also the `configure_include_project` function in [helpers.cmake] for more
information.
.
[helpers.cmake]: cmake/helpers.cmake
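.
As a sketch, the [FetchContent] route could also be driven from a consumer
`CMakeLists.txt` (the `myapp` target and the tag are illustrative, and we
assume the fetched project exports the same `zenohc::lib` target):
.
```cmake
include(FetchContent)
# Fetch and build zenoh-c as part of the consumer project.
FetchContent_Declare(zenohc
  GIT_REPOSITORY https://github.com/eclipse-zenoh/zenoh-c.git
  GIT_TAG        0.10.0-rc)
FetchContent_MakeAvailable(zenohc)
add_executable(myapp main.c)
target_link_libraries(myapp PRIVATE zenohc::lib)
```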
.
## Running the Examples
.
### Basic Pub/Sub Example
```bash
$ ./target/release/examples/z_sub
```
.
```bash
$ ./target/release/examples/z_pub
```
.
### Queryable and Query Example
```bash
$ ./target/release/examples/z_queryable
```
.
```bash
$ ./target/release/examples/z_get
```
.
## Running the Throughput Examples
```bash
$ ./target/release/examples/z_sub_thr
```
.
```bash
$ ./target/release/examples/z_pub_thr
```
.
## API conventions
Many of the types exposed by the `zenoh-c` API are types for which destruction
is necessary. To help you spot these types, we named them with the convention
that the name of any destructible type must start with `z_owned`.
.
For maximum performance, we try to make as few copies as possible. Sometimes,
this implies moving data that you own. Any function that takes a
non-const pointer to a `z_owned` type will perform its destruction. To make
this pattern more obvious, we encourage you to use the `z_move` macro instead
of a simple `&` to create these pointers. Rest assured that all `z_owned` types
are double-free safe, and that you may check whether any `z_owned_X_t` typed
value is still valid by using `z_X_check(&val)`, or the `z_check(val)` macro if
you're using C11.
.
We hope this convention will help you streamline your memory-safe usage of
zenoh, as following it should make looking for leaks trivial: simply search for
paths where a value of a `z_owned` type hasn't been passed to a function using
`z_move`.
.
Functions that simply need to borrow your data will instead take values of the
associated `z_X_t` type. You may construct them using `z_X_loan(&val)` (or the
`z_loan(val)` generic macro with C11).
.
Note that some `z_X_t` typed values can be constructed without needing to
`z_borrow` their owned variants. This allows you to reduce the number of
copies performed in your program.
.
The examples have been written with C11 in mind, using the conventions we
encourage you to follow.
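.
To illustrate these conventions, here is a minimal sketch of a publisher using
the ownership pattern (based on the 0.10 API; the exact function signatures may
differ between releases, so treat this as a guide rather than a reference):
.
```c
#include <stdio.h>
#include <string.h>
#include "zenoh.h"
int main(void) {
    // z_owned_* values must eventually be destroyed; pass them with z_move.
    z_owned_config_t config = z_config_default();
    z_owned_session_t session = z_open(z_move(config)); // consumes config
    if (!z_check(session)) { // C11 generic validity check
        printf("Unable to open session!\n");
        return -1;
    }
    // Borrowing with z_loan: z_put does not take ownership of the session.
    const char *value = "Hello Zenoh!";
    z_put(z_loan(session), z_keyexpr("demo/example"), (const uint8_t *)value,
          strlen(value), NULL);
    z_close(z_move(session)); // consumes and destroys the session
    return 0;
}
```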
.
Finally, we strongly advise that you refrain from using structure fields that
start with `_`:
* We try to maintain a common API between `zenoh-c` and
[`zenoh-pico`](https://github.com/eclipse-zenoh/zenoh-pico), such that porting
code from one to the other is, ideally, trivial. However, some types must have
distinct representations in either library, meaning that using these
representations explicitly will get you in trouble when porting.
* We reserve the right to change the memory layout of any type which has
`_`-prefixed fields, so trying to use them might cause your code to break on
updates.
.
## Logging
By default, zenoh-c enables Zenoh's logging library upon using the `z_open` or
`z_scout` functions. This behaviour can be disabled by adding
`-DDISABLE_LOGGER_AUTOINIT:bool=true` to the `cmake` configuration command. The
logger may then be manually re-enabled with the `zc_init_logger` function.
.
## Cross-Compilation
The following options have been introduced to facilitate cross-compilation.
> :warning: **WARNING** :warning: : Additional effort may be necessary,
depending on your environment.
.
- `-DZENOHC_CARGO_CHANNEL=nightly|beta|stable`: selects a specific Rust
toolchain release. See
[`rust-channels`](https://rust-lang.github.io/rustup/concepts/channels.html).
- `-DZENOHC_CARGO_FLAGS`: passes additional optional flags to the compilation.
See [`cargo flags`](https://doc.rust-lang.org/cargo/commands/cargo-build.html).
- `-DZENOHC_CUSTOM_TARGET`: specifies a cross-compilation target. Rust
currently supports several Tier 1, Tier 2 and Tier 3
[`targets`](https://doc.rust-lang.org/nightly/rustc/platform-support.html), but
keep in mind that zenoh-c only supports the following targets:
`aarch64-unknown-linux-gnu`, `x86_64-unknown-linux-gnu`,
`arm-unknown-linux-gnueabi`.
.
Let's put it all together in an example, assuming you want to cross-compile
for `aarch64-unknown-linux-gnu`:
.
1. Install the required packages:
- `sudo apt install gcc-aarch64-linux-gnu`
2. *(Only if you use `nightly`)*
- `rustup component add rust-src --toolchain nightly`
3. Compile zenoh-c. Assume that it's in the 'zenoh-c' directory. Notice that
the build in this sample is performed outside of the source directory:
```bash
$ export RUSTFLAGS="-Clinker=aarch64-linux-gnu-gcc -Car=aarch64-linux-gnu-ar"
$ mkdir -p build && cd build
$ cmake ../zenoh-c -DZENOHC_CARGO_CHANNEL=nightly
-DZENOHC_CARGO_FLAGS="-Zbuild-std=std,panic_abort"
-DZENOHC_CUSTOM_TARGET="aarch64-unknown-linux-gnu"
-DCMAKE_INSTALL_PREFIX=../aarch64/stage
$ cmake --build . --target install
```
Additionally, you can use the `RUSTFLAGS` environment variable to guide the
compilation.
.
.
If all goes well, the build files will be located at
`/path/to/zenoh-c/target/aarch64-unknown-linux-gnu/release`
and the installed files at the `CMAKE_INSTALL_PREFIX` location
(`../aarch64/stage` in the example above).
.
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-c
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-c
Package: libzenohc-dev
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 129
Depends: libzenohc (=0.10.0~rc)
Filename: ./0.10.0-rc/libzenohc-dev_0.10.0-rc_armel.deb
Size: 26700
MD5sum: 618b8598a122025c38dfbf549f180ebe
SHA1: 1bda8f85f5dfddaea20b79a4cab78962485ddcd1
SHA256: 9529324e001dca0659a5f901dddd6e683aa616c7af8615f873e5f67397e44ac2
SHA512: 4df19fef43844597c2cee7599c4fbed93490fd7d1f8dd329b7845774bb513ad25e3371513addb902d66c85424b259f20c5cf56eb596d352ac2e1cdcbe7e5abed
Homepage: http://zenoh.io
Description: The Zenoh C API
.
[](https://github.com/eclipse-zenoh/zenoh-c/actions?query=workflow%3A%22CI%22)
[](https://zenoh-c.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounced _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# C API
.
This repository provides a C binding based on the main [Zenoh implementation
written in Rust](https://github.com/eclipse-zenoh/zenoh).
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort into
maintaining compatibility between the various git repositories in the Zenoh
project.
.
1. Make sure that [Rust](https://www.rust-lang.org) is available on your
platform.
Please check [here](https://www.rust-lang.org/tools/install) to learn how
to install it.
If you already have the Rust toolchain installed, make sure it is
up-to-date with:
```bash
$ rustup update
```
.
2. Clone the [source] with `git`:
.
```bash
git clone https://github.com/eclipse-zenoh/zenoh-c.git
```
.
[source]: https://github.com/eclipse-zenoh/zenoh-c
.
3. Build:
.
Good CMake practice is to perform the build outside of the source directory,
leaving the source tree untouched. The examples below demonstrate this mode of
building. On the other hand, VSCode by default creates a build directory named
'build' inside the source tree. In this case the build script slightly changes
its behavior. See more about it in the section 'VSCode'.
.
By default the build configuration is set to `Release`, so it's not necessary
to add the `-DCMAKE_BUILD_TYPE=Release` option at the configuration step. But
if your platform uses a multi-config generator by default (as is the case on
Windows), you may need to add the option `--config Release` at the build step.
See the CMake [build-configurations] documentation for more details. The option
`--config Release` is skipped in further examples for brevity. It's actually
only necessary for [Visual Studio generators]; for [Ninja Multi-Config] the
build script is able to select `Release` as the default configuration.
.
```bash
$ mkdir -p build && cd build
$ cmake ../zenoh-c
$ cmake --build . --config Release
```
.
The generator to use is selected with option `-G`. If Ninja is installed on
your system, adding `-GNinja` to `cmake` command can greatly speed up the build
time:
.
```bash
$ cmake ../zenoh-c -GNinja
$ cmake --build .
```
.
[build-configurations]:
https://cmake.org/cmake/help/latest/manual/cmake-buildsystem.7.html#build-configurations
[Visual Studio generators]:
https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html#id14
[Ninja]: https://cmake.org/cmake/help/latest/generator/Ninja.html
[Ninja Multi-Config]:
https://cmake.org/cmake/help/latest/generator/Ninja%20Multi-Config.html
.
4. Install:
.
To install the zenoh-c library into the system, just build the `install`
target. You need root privileges to do so, as the default install location is
`/usr/local`.
.
```bash
$ cmake --build . --target install
```
.
If you want to install the zenoh-c libraries locally, you can set the
installation directory with `CMAKE_INSTALL_PREFIX`:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
$ cmake --build . --target install
```
.
By default only the dynamic library is installed. Set the
`ZENOHC_INSTALL_STATIC_LIBRARY` variable to also install the static library:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
-DZENOHC_INSTALL_STATIC_LIBRARY=TRUE
$ cmake --build . --target install
```
.
The installation puts the header files in the `include` directory, the library
files in the `lib` directory, and the CMake package configuration files for
the package `zenohc` in the `lib/cmake` directory. The library can later be
loaded with the CMake command `find_package(zenohc)`.
Link to the target `zenohc::lib` for the dynamic library or `zenohc::static`
for the static one in your CMakeLists.txt configuration file.
.
For the `Debug` configuration, the library package `zenohc_debug` is installed
side-by-side with the release `zenohc` library. The suffix `d` is added to the
names of the library files (libzenohc**d**.so).
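.
For instance, a minimal consumer `CMakeLists.txt` (the project and file names
`myapp`/`main.c` are illustrative) could look like this:
.
```cmake
cmake_minimum_required(VERSION 3.16)
project(myapp C)
# Locate the installed zenohc package configuration (from lib/cmake).
find_package(zenohc REQUIRED)
add_executable(myapp main.c)
# Link the dynamic library; use zenohc::static for the static one.
target_link_libraries(myapp PRIVATE zenohc::lib)
```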
.
5. VSCode
.
When the zenoh-c project is opened in VSCode, the build directory is set to
`build` inside the source tree (this is the default behavior of Microsoft
[CMake Tools]). The project build script detects this situation. In this case
it places the build files in the `target` directory and the `Cargo.toml` file
(which is generated from `Cargo.toml.in`) in the root of the source tree, as
Rust developers are used to and as the Rust build tools expect by default.
This behavior can also be explicitly enabled by setting the
`ZENOHC_BUILD_IN_SOURCE_TREE` variable to `TRUE`.
.
[CMake Tools]:
https://marketplace.visualstudio.com/items?itemName=ms-vscode.cmake-tools
.
## Building the Examples
.
The examples can be built in two ways. One is to select `examples` as a build
target of the zenoh-c project (assuming here that the current directory is
side-by-side with the zenoh-c directory):
.
```bash
$ cmake ../zenoh-c
$ cmake --build . --target examples
```
.
You may also use `--target <example_name>` if you wish to build only a
specific example.
.
All build artifacts will be in the `target/release/examples` directory in
this case.
.
The second way is to build `examples` directly as a root project:
.
```bash
$ cmake ../zenoh-c/examples
$ cmake --build .
```
.
In this case the examples executables will be built in the current directory.
.
As a root project, the `examples` project links `zenoh-c` with CMake's
[add_subdirectory] command by default. There are also other ways to link
`zenoh-c`: with [find_package] or [FetchContent]:
.
[add_subdirectory]:
https://cmake.org/cmake/help/latest/command/add_subdirectory.html
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[FetchContent]: https://cmake.org/cmake/help/latest/module/FetchContent.html
.
Link with `zenoh-c` installed into the default system location (with
[find_package]):
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
```
.
Link with `zenoh-c` installed in `~/.local` directory:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
-DCMAKE_INSTALL_PREFIX=~/.local
```
.
Download specific `zenoh-c` version from git with [FetchContent]:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=GIT_URL -DZENOHC_GIT_TAG=0.8.0-rc
```
.
See also the `configure_include_project` function in [helpers.cmake] for more
information.
.
[helpers.cmake]: cmake/helpers.cmake
.
## Running the Examples
.
### Basic Pub/Sub Example
```bash
$ ./target/release/examples/z_sub
```
.
```bash
$ ./target/release/examples/z_pub
```
.
### Queryable and Query Example
```bash
$ ./target/release/examples/z_queryable
```
.
```bash
$ ./target/release/examples/z_get
```
.
## Running the Throughput Examples
```bash
$ ./target/release/examples/z_sub_thr
```
.
```bash
$ ./target/release/examples/z_pub_thr
```
.
## API conventions
Many of the types exposed by the `zenoh-c` API are types for which destruction
is necessary. To help you spot these types, we follow the naming convention
that any destructible type starts with `z_owned`.
.
For maximum performance, we try to make as few copies as possible. Sometimes,
this implies moving data that you own. Any function that takes a
non-const pointer to a `z_owned` type will perform its destruction. To make
this pattern more obvious, we encourage you to use the `z_move` macro instead
of a simple `&` to create these pointers. Rest assured that all `z_owned` types
are double-free safe, and that you may check whether any `z_owned_X_t` typed
value is still valid by using `z_X_check(&val)`, or the `z_check(val)` macro if
you're using C11.
.
We hope this convention will help you streamline your memory-safe usage of
zenoh, as following it should make looking for leaks trivial: simply search for
paths where a value of a `z_owned` type hasn't been passed to a function using
`z_move`.
.
Functions that simply need to borrow your data will instead take values of the
associated `z_X_t` type. You may construct them using `z_X_loan(&val)` (or the
`z_loan(val)` generic macro with C11).
.
Note that some `z_X_t` typed values can be constructed without needing to
loan their owned variants. This allows you to reduce the number of copies
performed in your program.
.
The examples have been written with C11 in mind, using the conventions we
encourage you to follow.
.
Finally, we strongly advise that you refrain from using structure fields that
start with `_`:
* We try to maintain a common API between `zenoh-c` and
[`zenoh-pico`](https://github.com/eclipse-zenoh/zenoh-pico), such that porting
code from one to the other is, ideally, trivial. However, some types must have
distinct representations in either library, meaning that using these
representations explicitly will get you in trouble when porting.
* We reserve the right to change the memory layout of any type which has
`_`-prefixed fields, so trying to use them might cause your code to break on
updates.
.
## Logging
By default, zenoh-c enables Zenoh's logging library upon using the `z_open` or
`z_scout` functions. This behaviour can be disabled by adding
`-DDISABLE_LOGGER_AUTOINIT:bool=true` to the `cmake` configuration command. The
logger may then be manually re-enabled with the `zc_init_logger` function.
.
## Cross-Compilation
* The following alternative options have been introduced to facilitate
cross-compilation.
> :warning: **WARNING** :warning: : Additional effort may be necessary,
depending on your environment.
.
- `-DZENOHC_CARGO_CHANNEL=nightly|beta|stable`: selects a specific Rust
toolchain release, see
[rust-channels](https://rust-lang.github.io/rustup/concepts/channels.html)
- `-DZENOHC_CARGO_FLAGS`: several optional flags can be used for compilation,
see [cargo flags](https://doc.rust-lang.org/cargo/commands/cargo-build.html)
- `-DZENOHC_CUSTOM_TARGET`: specifies a cross-compilation target. Rust
currently supports several Tier-1, Tier-2 and Tier-3 targets, see
[targets](https://doc.rust-lang.org/nightly/rustc/platform-support.html). But
keep in mind that zenoh-c only supports the following targets:
`aarch64-unknown-linux-gnu`, `x86_64-unknown-linux-gnu`,
`arm-unknown-linux-gnueabi`
.
Let's put it all together in an example.
Assume you want to cross-compile for aarch64-unknown-linux-gnu.
.
1. Install the required packages:
   - `sudo apt install gcc-aarch64-linux-gnu`
2. *(Only if you use `nightly`)*:
   - `rustup component add rust-src --toolchain nightly`
3. Compile Zenoh-C. Assume that it is in the 'zenoh-c' directory. Notice that
the build in this sample is performed outside of the source directory:
```bash
$ export RUSTFLAGS="-Clinker=aarch64-linux-gnu-gcc -Car=aarch64-linux-gnu-ar"
$ mkdir -p build && cd build
$ cmake ../zenoh-c -DZENOHC_CARGO_CHANNEL=nightly
-DZENOHC_CARGO_FLAGS="-Zbuild-std=std,panic_abort"
-DZENOHC_CUSTOM_TARGET="aarch64-unknown-linux-gnu"
-DCMAKE_INSTALL_PREFIX=../aarch64/stage
$ cmake --build . --target install
```
Additionally, you can use the `RUSTFLAGS` environment variable to guide the
compilation.
.
If all goes well, the resulting build files will be located at
`/path/to/zenoh-c/target/aarch64-unknown-linux-gnu/release`.
.
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-c
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-c
Package: libzenohc-dev
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 129
Depends: libzenohc (=0.10.0~rc)
Filename: ./0.10.0-rc/libzenohc-dev_0.10.0-rc_armhf.deb
Size: 26684
MD5sum: 74e04a773afa8583205cffcc0091b54a
SHA1: e5a6622192cbfcbcfe1edfac2a2a6fedd9781ba3
SHA256: 478db109adb57800e3ba4f1db42bcf2e99093e7ac55a767e1bc9ab61b26310e5
SHA512: 4f3fbb232738edbac46e2df920816a0cb6ca02a7bcae095c77470e313c306a0a4e3df523090f875578c9bec980fbd709fee370efaba4ea00ea4f347305412fc8
Homepage: http://zenoh.io
Description: The Zenoh C API
.
[](https://github.com/eclipse-zenoh/zenoh-c/actions?query=workflow%3A%22CI%22)
[](https://zenoh-c.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounced _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# C API
.
This repository provides a C binding based on the main [Zenoh implementation
written in Rust](https://github.com/eclipse-zenoh/zenoh).
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
1. Make sure that [Rust](https://www.rust-lang.org) is available on your
platform.
Please check [here](https://www.rust-lang.org/tools/install) to learn how
to install it.
If you already have the Rust toolchain installed, make sure it is
up-to-date with:
```bash
$ rustup update
```
.
2. Clone the [source] with `git`:
.
```bash
git clone https://github.com/eclipse-zenoh/zenoh-c.git
```
.
[source]: https://github.com/eclipse-zenoh/zenoh-c
.
3. Build:
.
Good CMake practice is to perform the build outside of the source directory,
leaving the source tree untouched. The examples below demonstrate this mode of
building. On the other hand, VSCode by default creates a build directory named
'build' inside the source tree. In this case the build script slightly changes
its behavior. See more about it in the section 'VSCode'.
.
By default the build configuration is set to `Release`, so it's not necessary
to add the `-DCMAKE_BUILD_TYPE=Release` option at the configuration step. But
if your platform uses a multi-config generator by default (this is the case on
Windows), you may need to add the option `--config Release` at the build step.
See more in the CMake [build-configurations] documentation. The option
`--config Release` is skipped in further examples for brevity. It's actually
necessary for [Visual Studio generators] only. For [Ninja Multi-Config] the
build script is able to select `Release` as the default configuration.
.
```bash
$ mkdir -p build && cd build
$ cmake ../zenoh-c
$ cmake --build . --config Release
```
.
The generator to use is selected with the option `-G`. If Ninja is installed
on your system, adding `-GNinja` to the `cmake` command can greatly speed up
the build time:
.
```bash
$ cmake ../zenoh-c -GNinja
$ cmake --build .
```
.
[build-configurations]:
https://cmake.org/cmake/help/latest/manual/cmake-buildsystem.7.html#build-configurations
[Visual Studio generators]:
https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html#id14
[Ninja]: https://cmake.org/cmake/help/latest/generator/Ninja.html
[Ninja Multi-Config]:
https://cmake.org/cmake/help/latest/generator/Ninja%20Multi-Config.html
.
4. Install:
.
To install the zenoh-c library into the system, just build the target
`install`. You need root privileges to do it, as the default install location
is `/usr/local`.
.
```bash
$ cmake --build . --target install
```
.
If you want to install the zenoh-c libraries locally, you can set the
installation directory with `CMAKE_INSTALL_PREFIX`:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
$ cmake --build . --target install
```
.
By default only the dynamic library is installed. Set the
`ZENOHC_INSTALL_STATIC_LIBRARY` variable to also install the static library:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
-DZENOHC_INSTALL_STATIC_LIBRARY=TRUE
$ cmake --build . --target install
```
.
The installation places the header files in the `include` directory, the
library files in the `lib` directory, and the CMake package configuration
files for the `zenohc` package in the `lib/cmake` directory. The library can
later be loaded with the CMake command `find_package(zenohc)`.
Link against the target `zenohc::lib` for the dynamic library and
`zenohc::static` for the static one in your CMakeLists.txt configuration file.
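.
As an illustration, a downstream `CMakeLists.txt` consuming the installed
package might look like this (the project name, executable name and source
file are placeholders invented for this sketch, not part of zenoh-c):
.
```cmake
cmake_minimum_required(VERSION 3.16)
project(zenohc_consumer C)       # placeholder project name
# Locate the installed zenohc package (its config lives in lib/cmake/zenohc)
find_package(zenohc REQUIRED)
add_executable(my_app main.c)    # placeholder executable and source file
# zenohc::lib is the dynamic library; use zenohc::static for the static one
target_link_libraries(my_app PRIVATE zenohc::lib)
```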
.
For the `Debug` configuration, the library package `zenohc_debug` is installed
side-by-side with the release `zenohc` library. The suffix `d` is added to the
names of the library files (libzenohc**d**.so).
.
5. VSCode
.
When the zenoh-c project is opened in VSCode, the build directory is set to
`build` inside the source tree (this is the default behavior of Microsoft
[CMake Tools]). The project build script detects this situation. In this case
it places the build files in the `target` directory and the `Cargo.toml` file
(which is generated from `Cargo.toml.in`) in the root of the source tree, as
Rust developers are used to and as the Rust build tools expect by default.
This behavior can also be explicitly enabled by setting the
`ZENOHC_BUILD_IN_SOURCE_TREE` variable to `TRUE`.
.
[CMake Tools]:
https://marketplace.visualstudio.com/items?itemName=ms-vscode.cmake-tools
.
## Building the Examples
.
The examples can be built in two ways. One is to select `examples` as a build
target of the zenoh-c project (assuming here that the current directory is
side-by-side with the zenoh-c directory):
.
```bash
$ cmake ../zenoh-c
$ cmake --build . --target examples
```
.
You may also use `--target <example_name>` if you wish to build only a
specific example.
.
All build artifacts will be in the `target/release/examples` directory in
this case.
.
The second way is to build `examples` directly as a root project:
.
```bash
$ cmake ../zenoh-c/examples
$ cmake --build .
```
.
In this case the examples executables will be built in the current directory.
.
As a root project, the `examples` project links `zenoh-c` with CMake's
[add_subdirectory] command by default. There are also other ways to link
`zenoh-c`: with [find_package] or [FetchContent]:
.
[add_subdirectory]:
https://cmake.org/cmake/help/latest/command/add_subdirectory.html
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[FetchContent]: https://cmake.org/cmake/help/latest/module/FetchContent.html
.
Link with `zenoh-c` installed into the default system location (with
[find_package]):
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
```
.
Link with `zenoh-c` installed in `~/.local` directory:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
-DCMAKE_INSTALL_PREFIX=~/.local
```
.
Download specific `zenoh-c` version from git with [FetchContent]:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=GIT_URL -DZENOHC_GIT_TAG=0.8.0-rc
```
.
See also the `configure_include_project` function in [helpers.cmake] for more
information.
.
[helpers.cmake]: cmake/helpers.cmake
.
## Running the Examples
.
### Basic Pub/Sub Example
```bash
$ ./target/release/examples/z_sub
```
.
```bash
$ ./target/release/examples/z_pub
```
.
### Queryable and Query Example
```bash
$ ./target/release/examples/z_queryable
```
.
```bash
$ ./target/release/examples/z_get
```
.
## Running the Throughput Examples
```bash
$ ./target/release/examples/z_sub_thr
```
.
```bash
$ ./target/release/examples/z_pub_thr
```
.
## API conventions
Many of the types exposed by the `zenoh-c` API are types for which destruction
is necessary. To help you spot these types, we follow the naming convention
that any destructible type starts with `z_owned`.
.
For maximum performance, we try to make as few copies as possible. Sometimes,
this implies moving data that you own. Any function that takes a
non-const pointer to a `z_owned` type will perform its destruction. To make
this pattern more obvious, we encourage you to use the `z_move` macro instead
of a simple `&` to create these pointers. Rest assured that all `z_owned` types
are double-free safe, and that you may check whether any `z_owned_X_t` typed
value is still valid by using `z_X_check(&val)`, or the `z_check(val)` macro if
you're using C11.
.
We hope this convention will help you streamline your memory-safe usage of
zenoh, as following it should make looking for leaks trivial: simply search for
paths where a value of a `z_owned` type hasn't been passed to a function using
`z_move`.
.
Functions that simply need to borrow your data will instead take values of the
associated `z_X_t` type. You may construct them using `z_X_loan(&val)` (or the
`z_loan(val)` generic macro with C11).
.
Note that some `z_X_t` typed values can be constructed without needing to
loan their owned variants. This allows you to reduce the number of copies
performed in your program.
.
The examples have been written with C11 in mind, using the conventions we
encourage you to follow.
.
Finally, we strongly advise that you refrain from using structure fields that
start with `_`:
* We try to maintain a common API between `zenoh-c` and
[`zenoh-pico`](https://github.com/eclipse-zenoh/zenoh-pico), such that porting
code from one to the other is, ideally, trivial. However, some types must have
distinct representations in either library, meaning that using these
representations explicitly will get you in trouble when porting.
* We reserve the right to change the memory layout of any type which has
`_`-prefixed fields, so trying to use them might cause your code to break on
updates.
.
## Logging
By default, zenoh-c enables Zenoh's logging library upon using the `z_open` or
`z_scout` functions. This behaviour can be disabled by adding
`-DDISABLE_LOGGER_AUTOINIT:bool=true` to the `cmake` configuration command. The
logger may then be manually re-enabled with the `zc_init_logger` function.
.
## Cross-Compilation
* The following alternative options have been introduced to facilitate
cross-compilation.
> :warning: **WARNING** :warning: : Additional effort may be necessary,
depending on your environment.
.
- `-DZENOHC_CARGO_CHANNEL=nightly|beta|stable`: selects a specific Rust
toolchain release, see
[rust-channels](https://rust-lang.github.io/rustup/concepts/channels.html)
- `-DZENOHC_CARGO_FLAGS`: several optional flags can be used for compilation,
see [cargo flags](https://doc.rust-lang.org/cargo/commands/cargo-build.html)
- `-DZENOHC_CUSTOM_TARGET`: specifies a cross-compilation target. Rust
currently supports several Tier-1, Tier-2 and Tier-3 targets, see
[targets](https://doc.rust-lang.org/nightly/rustc/platform-support.html). But
keep in mind that zenoh-c only supports the following targets:
`aarch64-unknown-linux-gnu`, `x86_64-unknown-linux-gnu`,
`arm-unknown-linux-gnueabi`
.
Let's put it all together in an example.
Assume you want to cross-compile for aarch64-unknown-linux-gnu.
.
1. Install the required packages:
   - `sudo apt install gcc-aarch64-linux-gnu`
2. *(Only if you use `nightly`)*:
   - `rustup component add rust-src --toolchain nightly`
3. Compile Zenoh-C. Assume that it is in the 'zenoh-c' directory. Notice that
the build in this sample is performed outside of the source directory:
```bash
$ export RUSTFLAGS="-Clinker=aarch64-linux-gnu-gcc -Car=aarch64-linux-gnu-ar"
$ mkdir -p build && cd build
$ cmake ../zenoh-c -DZENOHC_CARGO_CHANNEL=nightly
-DZENOHC_CARGO_FLAGS="-Zbuild-std=std,panic_abort"
-DZENOHC_CUSTOM_TARGET="aarch64-unknown-linux-gnu"
-DCMAKE_INSTALL_PREFIX=../aarch64/stage
$ cmake --build . --target install
```
Additionally, you can use the `RUSTFLAGS` environment variable to guide the
compilation.
.
If all goes well, the resulting build files will be located at
`/path/to/zenoh-c/target/aarch64-unknown-linux-gnu/release`.
.
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-c
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-c
Package: libzenohc
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 13184
Depends: libc6 (>= 2.29)
Filename: ./0.10.0-rc/libzenohc_0.10.0-rc_amd64.deb
Size: 3538660
MD5sum: 6054cc8ff1f4909bc173d9dc1ffc0659
SHA1: 49f12215775be41f7d36189c727f16f8dbadaff0
SHA256: d76a681535fd4fd92090967d68f0115a04e25a3cc4e519a9552c658f74dd944a
SHA512: 1cbd71cdec69f8dcca46d8c83cd50f5403552550133ba217b70058d0391ce9e1f5fbcc9c2ea9cd800072a19253f2c3411cdb1190cc34ed9a8611607085daf717
Homepage: http://zenoh.io
Description: The Zenoh C API
.
[](https://github.com/eclipse-zenoh/zenoh-c/actions?query=workflow%3A%22CI%22)
[](https://zenoh-c.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounced _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# C API
.
This repository provides a C binding based on the main [Zenoh implementation
written in Rust](https://github.com/eclipse-zenoh/zenoh).
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
1. Make sure that [Rust](https://www.rust-lang.org) is available on your
platform.
Please check [here](https://www.rust-lang.org/tools/install) to learn how
to install it.
If you already have the Rust toolchain installed, make sure it is
up-to-date with:
```bash
$ rustup update
```
.
2. Clone the [source] with `git`:
.
```bash
git clone https://github.com/eclipse-zenoh/zenoh-c.git
```
.
[source]: https://github.com/eclipse-zenoh/zenoh-c
.
3. Build:
.
Good CMake practice is to perform the build outside of the source directory,
leaving the source tree untouched. The examples below demonstrate this mode of
building. On the other hand, VSCode by default creates a build directory named
'build' inside the source tree. In this case the build script slightly changes
its behavior. See more about it in the section 'VSCode'.
.
By default the build configuration is set to `Release`, so it's not necessary
to add the `-DCMAKE_BUILD_TYPE=Release` option at the configuration step. But
if your platform uses a multi-config generator by default (this is the case on
Windows), you may need to add the option `--config Release` at the build step.
See more in the CMake [build-configurations] documentation. The option
`--config Release` is skipped in further examples for brevity. It's actually
necessary for [Visual Studio generators] only. For [Ninja Multi-Config] the
build script is able to select `Release` as the default configuration.
.
```bash
$ mkdir -p build && cd build
$ cmake ../zenoh-c
$ cmake --build . --config Release
```
.
The generator to use is selected with the option `-G`. If Ninja is installed
on your system, adding `-GNinja` to the `cmake` command can greatly speed up
the build time:
.
```bash
$ cmake ../zenoh-c -GNinja
$ cmake --build .
```
.
[build-configurations]:
https://cmake.org/cmake/help/latest/manual/cmake-buildsystem.7.html#build-configurations
[Visual Studio generators]:
https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html#id14
[Ninja]: https://cmake.org/cmake/help/latest/generator/Ninja.html
[Ninja Multi-Config]:
https://cmake.org/cmake/help/latest/generator/Ninja%20Multi-Config.html
.
4. Install:
.
To install the zenoh-c library into the system, just build the target
`install`. You need root privileges to do it, as the default install location
is `/usr/local`.
.
```bash
$ cmake --build . --target install
```
.
If you want to install the zenoh-c libraries locally, you can set the
installation directory with `CMAKE_INSTALL_PREFIX`:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
$ cmake --build . --target install
```
.
By default only the dynamic library is installed. Set the
`ZENOHC_INSTALL_STATIC_LIBRARY` variable to also install the static library:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
-DZENOHC_INSTALL_STATIC_LIBRARY=TRUE
$ cmake --build . --target install
```
.
The installation places the header files in the `include` directory, the
library files in the `lib` directory, and the CMake package configuration
files for the `zenohc` package in the `lib/cmake` directory. The library can
later be loaded with the CMake command `find_package(zenohc)`.
Link against the target `zenohc::lib` for the dynamic library and
`zenohc::static` for the static one in your CMakeLists.txt configuration file.
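.
As an illustration, a downstream `CMakeLists.txt` consuming the installed
package might look like this (the project name, executable name and source
file are placeholders invented for this sketch, not part of zenoh-c):
.
```cmake
cmake_minimum_required(VERSION 3.16)
project(zenohc_consumer C)       # placeholder project name
# Locate the installed zenohc package (its config lives in lib/cmake/zenohc)
find_package(zenohc REQUIRED)
add_executable(my_app main.c)    # placeholder executable and source file
# zenohc::lib is the dynamic library; use zenohc::static for the static one
target_link_libraries(my_app PRIVATE zenohc::lib)
```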
.
For the `Debug` configuration, the library package `zenohc_debug` is installed
side-by-side with the release `zenohc` library. The suffix `d` is added to the
names of the library files (libzenohc**d**.so).
.
5. VSCode
.
When the zenoh-c project is opened in VSCode, the build directory is set to
`build` inside the source tree (this is the default behavior of Microsoft
[CMake Tools]). The project build script detects this situation. In this case
it places the build files in the `target` directory and the `Cargo.toml` file
(which is generated from `Cargo.toml.in`) in the root of the source tree, as
Rust developers are used to and as the Rust build tools expect by default.
This behavior can also be explicitly enabled by setting the
`ZENOHC_BUILD_IN_SOURCE_TREE` variable to `TRUE`.
.
[CMake Tools]:
https://marketplace.visualstudio.com/items?itemName=ms-vscode.cmake-tools
.
## Building the Examples
.
The examples can be built in two ways. One is to select `examples` as a build
target of the zenoh-c project (assuming here that the current directory is
side-by-side with the zenoh-c directory):
.
```bash
$ cmake ../zenoh-c
$ cmake --build . --target examples
```
.
You may also use `--target <example_name>` if you wish to build only a
specific example.
.
All build artifacts will be in the `target/release/examples` directory in
this case.
.
The second way is to build `examples` directly as a root project:
.
```bash
$ cmake ../zenoh-c/examples
$ cmake --build .
```
.
In this case the examples executables will be built in the current directory.
.
As a root project, the `examples` project links `zenoh-c` with CMake's
[add_subdirectory] command by default. There are also other ways to link
`zenoh-c`: with [find_package] or [FetchContent]:
.
[add_subdirectory]:
https://cmake.org/cmake/help/latest/command/add_subdirectory.html
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[FetchContent]: https://cmake.org/cmake/help/latest/module/FetchContent.html
.
Link with `zenoh-c` installed into the default system location (with
[find_package]):
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
```
.
Link with `zenoh-c` installed in `~/.local` directory:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
-DCMAKE_INSTALL_PREFIX=~/.local
```
.
Download specific `zenoh-c` version from git with [FetchContent]:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=GIT_URL -DZENOHC_GIT_TAG=0.8.0-rc
```
.
See also the `configure_include_project` function in [helpers.cmake] for more
information.
.
[helpers.cmake]: cmake/helpers.cmake
.
## Running the Examples
.
### Basic Pub/Sub Example
```bash
$ ./target/release/examples/z_sub
```
.
```bash
$ ./target/release/examples/z_pub
```
.
### Queryable and Query Example
```bash
$ ./target/release/examples/z_queryable
```
.
```bash
$ ./target/release/examples/z_get
```
.
## Running the Throughput Examples
```bash
$ ./target/release/examples/z_sub_thr
```
.
```bash
$ ./target/release/examples/z_pub_thr
```
.
## API conventions
Many of the types exposed by the `zenoh-c` API are types for which destruction
is necessary. To help you spot these types, we follow the naming convention
that any destructible type starts with `z_owned`.
.
For maximum performance, we try to make as few copies as possible. Sometimes,
this implies moving data that you own. Any function that takes a
non-const pointer to a `z_owned` type will perform its destruction. To make
this pattern more obvious, we encourage you to use the `z_move` macro instead
of a simple `&` to create these pointers. Rest assured that all `z_owned` types
are double-free safe, and that you may check whether any `z_owned_X_t` typed
value is still valid by using `z_X_check(&val)`, or the `z_check(val)` macro if
you're using C11.
.
We hope this convention will help you streamline your memory-safe usage of
zenoh, as following it should make looking for leaks trivial: simply search for
paths where a value of a `z_owned` type hasn't been passed to a function using
`z_move`.
.
Functions that simply need to borrow your data will instead take values of the
associated `z_X_t` type. You may construct them using `z_X_loan(&val)` (or the
`z_loan(val)` generic macro with C11).
.
Note that some `z_X_t` typed values can be constructed without needing to
loan their owned variants. This allows you to reduce the number of copies
performed in your program.
.
The examples have been written with C11 in mind, using the conventions we
encourage you to follow.
.
Finally, we strongly advise that you refrain from using structure fields that
start with `_`:
* We try to maintain a common API between `zenoh-c` and
[`zenoh-pico`](https://github.com/eclipse-zenoh/zenoh-pico), such that porting
code from one to the other is, ideally, trivial. However, some types must have
distinct representations in either library, meaning that using these
representations explicitly will get you in trouble when porting.
* We reserve the right to change the memory layout of any type which has
`_`-prefixed fields, so trying to use them might cause your code to break on
updates.
.
## Logging
By default, zenoh-c enables Zenoh's logging library upon using the `z_open` or
`z_scout` functions. This behaviour can be disabled by adding
`-DDISABLE_LOGGER_AUTOINIT:bool=true` to the `cmake` configuration command. The
logger may then be manually re-enabled with the `zc_init_logger` function.
.
## Cross-Compilation
* The following alternative options have been introduced to facilitate
cross-compilation.
> :warning: **WARNING** :warning: : Additional effort may be necessary,
depending on your environment.
.
- `-DZENOHC_CARGO_CHANNEL=nightly|beta|stable`: selects a specific Rust
toolchain release, see
[rust-channels](https://rust-lang.github.io/rustup/concepts/channels.html)
- `-DZENOHC_CARGO_FLAGS`: several optional flags can be used for compilation,
see [cargo flags](https://doc.rust-lang.org/cargo/commands/cargo-build.html)
- `-DZENOHC_CUSTOM_TARGET`: specifies a cross-compilation target. Rust
currently supports several Tier-1, Tier-2 and Tier-3 targets, see
[targets](https://doc.rust-lang.org/nightly/rustc/platform-support.html). But
keep in mind that zenoh-c only supports the following targets:
`aarch64-unknown-linux-gnu`, `x86_64-unknown-linux-gnu`,
`arm-unknown-linux-gnueabi`
.
Let's put it all together in an example.
Assume you want to cross-compile for aarch64-unknown-linux-gnu.
.
1. Install the required packages:
   - `sudo apt install gcc-aarch64-linux-gnu`
2. *(Only if you use `nightly`)*:
   - `rustup component add rust-src --toolchain nightly`
3. Compile Zenoh-C. Assume that it is in the 'zenoh-c' directory. Notice that
the build in this sample is performed outside of the source directory:
```bash
$ export RUSTFLAGS="-Clinker=aarch64-linux-gnu-gcc -Car=aarch64-linux-gnu-ar"
$ mkdir -p build && cd build
$ cmake ../zenoh-c -DZENOHC_CARGO_CHANNEL=nightly
-DZENOHC_CARGO_FLAGS="-Zbuild-std=std,panic_abort"
-DZENOHC_CUSTOM_TARGET="aarch64-unknown-linux-gnu"
-DCMAKE_INSTALL_PREFIX=../aarch64/stage
$ cmake --build . --target install
```
Additionally, you can use the `RUSTFLAGS` environment variable to guide the
compilation.
.
If all goes well, the resulting build files will be located at
`/path/to/zenoh-c/target/aarch64-unknown-linux-gnu/release`.
.
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-c
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-c
Package: libzenohc
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 11871
Depends: libc6:arm64 (>= 2.31)
Filename: ./0.10.0-rc/libzenohc_0.10.0-rc_arm64.deb
Size: 3176184
MD5sum: 80f200eadf14e184ec53c9ba66c97dae
SHA1: 7b9ca457a9ae4db095bcbff6af55f4e529291449
SHA256: 134d91c6280e0ee5262bc478a56d342c7780f33c60e3504d2745c87a0485c607
SHA512: 439feca1975f1c97cee302b0142c5f9411f53f9cf0fd24bd8781532f9bfbfa19d77fb607bce84fb7af6e7d5f43016b8d229673db736455a0e8b2d298c0800b21
Homepage: http://zenoh.io
Description: The Zenoh C API
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-c
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-c
Package: libzenohc
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 12274
Depends: libc6:armel (>= 2.31)
Filename: ./0.10.0-rc/libzenohc_0.10.0-rc_armel.deb
Size: 3309652
MD5sum: 124ad9458c2cc0fe0a082c1cffc11c89
SHA1: 711bdbd4e53c2df650017d15115c759844767ec5
SHA256: ba4d320ac08c84ed3f5a50506f4830a4d3412ab14dd32e5abcc82f0b744e1e00
SHA512: 651f46ccf51319a314d2f95e973619b5c8112d99c9465c7e9e1399d2121887a73b036bc018ba8028d5d722eab91245e2e6fad441abd46fd5285f249a9bded0fb
Homepage: http://zenoh.io
Description: The Zenoh C API
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-c
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-c
Package: libzenohc
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 12070
Depends: libc6:armhf (>= 2.31)
Filename: ./0.10.0-rc/libzenohc_0.10.0-rc_armhf.deb
Size: 3299692
MD5sum: 29c51fdea9832724080e66bf50221735
SHA1: 06bc84766277bcc3529fc62e91609c7c6588dc11
SHA256: 4fb714866d33403cc001fa2e4120ed3fbbbe0d0b6f0a71dea8526b01387ccb56
SHA512: 5fc4a5644c9da482da573225dedc6713bb7c32927a35da96f26f72e1fc250331244d90aa8d163312997b7fd80ac7990b02aa0e2e3e2dcda36a06a8adbcb18d77
Homepage: http://zenoh.io
Description: The Zenoh C API
.
[](https://github.com/eclipse-zenoh/zenoh-c/actions?query=workflow%3A%22CI%22)
[](https://zenoh-c.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# C API
.
This repository provides a C binding based on the main [Zenoh implementation
written in Rust](https://github.com/eclipse-zenoh/zenoh).
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
mantaining compatibility between the various git repositories in the Zenoh
project.
.
1. Make sure that [Rust](https://www.rust-lang.org) is available on your
platform.
Please check [here](https://www.rust-lang.org/tools/install) to learn how
to install it.
If you already have the Rust toolchain installed, make sure it is
up-to-date with:
```bash
$ rustup update
```
.
3. Clone the [source] with `git`:
.
```bash
git clone https://github.com/eclipse-zenoh/zenoh-c.git
```
.
[source]: https://github.com/eclipse-zenoh/zenoh-c
.
3. Build:
.
Good CMake practice is to perform build outside of source directory, leaving
source tree untouched. The examples below demonstrates this mode of building.
On the other hand VScode by default creates build directory named 'build'
inside source tree. In this case build script sligthly changes its behavior.
See more about it in section 'VScode'.
.
By default build configuration is set to `Release`, it's not necessary to add
`-DCMAKE_BUILD_TYPE=Release` option on configuration step. But if your platform
uses multi-config generator by default (this is the case on Windows), you may
need to add option `--config Release` on build step. See more in CMake
[build-configurations] documenation. Option`--config Release` is skipped in
further examples for brewity. It's actually necessary for [Visual Studio
generators] only. For [Ninja Multi-Config] the build script is able to select
`Release` as the default configuration.
.
```bash
$ mkdir -p build && cd build
$ cmake ../zenoh-c
$ cmake --build . --config Release
```
.
The generator to use is selected with option `-G`. If Ninja is installed on
your system, adding `-GNinja` to `cmake` command can greatly speed up the build
time:
.
```bash
$ cmake ../zenoh-c -GNinja
$ cmake --build .
```
.
[build-configurations]:
https://cmake.org/cmake/help/latest/manual/cmake-buildsystem.7.html#build-configurations
[Visual Studio generators]:
https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html#id14
[Ninja]: https://cmake.org/cmake/help/latest/generator/Ninja.html
[Ninja Multi-Config]:
https://cmake.org/cmake/help/latest/generator/Ninja%20Multi-Config.html
.
3. Install:
.
To install zenoh-c library into system just build target `install`. You need
root privileges to do it, as the default install location is `/usr/local`.
.
```bash
$ cmake --build . --target install
```
.
If you want to install zenoh-c libraries locally, you can set the
installation directory with `CMAKE_INSTALL_PREFIX`
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
$ cmake --build . --target install
```
.
By default only dynamic library is installed. Set
`ZENOHC_INSTALL_STATIC_LIBRARY` variable to install static library also:
.
```bash
$ cmake ../zenoh-c -DCMAKE_INSTALL_PREFIX=~/.local
-DZENOHC_INSTALL_STATIC_LIBRARY=TRUE
$ cmake --build . --target install
```
.
The result of installation is the header files in `include` directory, the
library files in `lib` directory and cmake package configuration files for
package `zenohc` in `lib/cmake` directory. The library later can be loaded with
CMake command `find_package(zenohc)`.
Link to targets `zenohc::lib` for dynamic library and `zenohc::static` for
static one in your CMakeLists.txt configuration file.
.
For `Debug` configuration the library package `zenohc_debug` is installed
side-by-side with release `zenohc` library. Suffix `d` is added to names of
library files (libzenohc**d**.so).
.
4. VScode
.
When zenoh-c project is opened in VSCode the build directory is set to
`build` inside source tree (this is default behavior of Microsoft [CMake
Tools]). The project build script detects this situation. In this case it
places build files in `target` directory and `Cargo.toml` file (which is
generated from `Cargo.toml.in`) into the root of source tree, as the rust
developers used to and as the rust build tools expects by default. This
behavior also can be explicitly enabled by setting
`ZENOHC_BUILD_IN_SOURCE_TREE` variable to `TRUE`.
.
[CMake Tools]:
https://marketplace.visualstudio.com/items?itemName=ms-vscode.cmake-tools
.
## Building the Examples
.
The examples can be built in two ways. One is to select `examples` as a build
target of zenoh-c project (assuming here that the current directory is
side-by-side with zenoh-c directory):
.
```bash
$ cmake ../zenoh-c
$ cmake --build . --target examples
```
.
You may also use `--target ` if you wish to only build a
specific example.
.
All build artifacts will be in the `target/release/examples` directory in
this case.
.
Second way is to directly build `examples` as a root project:
.
```bash
$ cmake ../zenoh-c/examples
$ cmake --build .
```
.
In this case the examples executables will be built in the current directory.
.
As a root project, the `examples` project links `zenoh-c` with CMake's
[add_subdirectory] command by default. There are also other ways to link
`zenoh-c`: with [find_package] or [FetchContent]:
.
[add_subdirectory]:
https://cmake.org/cmake/help/latest/command/add_subdirectory.html
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[FetchContent]: https://cmake.org/cmake/help/latest/module/FetchContent.html
.
Link with `zenoh-c` installed into the default location in the system (with
[find_package]):
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
```
.
Link with `zenoh-c` installed in `~/.local` directory:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=PACKAGE
-DCMAKE_INSTALL_PREFIX=~/.local
```
.
Download a specific `zenoh-c` version from git with [FetchContent]:
.
```bash
$ cmake ../zenoh-c/examples -DZENOHC_SOURCE=GIT_URL -DZENOHC_GIT_TAG=0.8.0-rc
```
.
See also the `configure_include_project` function in [helpers.cmake] for more
information.
.
[helpers.cmake]: cmake/helpers.cmake
.
## Running the Examples
.
### Basic Pub/Sub Example
```bash
$ ./target/release/examples/z_sub
```
.
```bash
$ ./target/release/examples/z_pub
```
.
### Queryable and Query Example
```bash
$ ./target/release/examples/z_queryable
```
.
```bash
$ ./target/release/examples/z_get
```
.
## Running the Throughput Examples
```bash
$ ./target/release/examples/z_sub_thr
```
.
```bash
$ ./target/release/examples/z_pub_thr
```
.
## API conventions
Many of the types exposed by the `zenoh-c` API require explicit destruction. To
help you spot these types, we follow the convention that the name of any
destructible type starts with `z_owned`.
.
For maximum performance, we try to make as few copies as possible. Sometimes,
this implies moving data that you own. Any function that takes a
non-const pointer to a `z_owned` type will perform its destruction. To make
this pattern more obvious, we encourage you to use the `z_move` macro instead
of a simple `&` to create these pointers. Rest assured that all `z_owned` types
are double-free safe, and that you may check whether any `z_owned_X_t` typed
value is still valid by using `z_X_check(&val)`, or the `z_check(val)` macro if
you're using C11.
.
We hope this convention will help you streamline your memory-safe usage of
zenoh, as following it should make looking for leaks trivial: simply search for
paths where a value of a `z_owned` type hasn't been passed to a function using
`z_move`.
.
Functions that simply need to borrow your data will instead take values of the
associated `z_X_t` type. You may construct them using `z_X_loan(&val)` (or the
`z_loan(val)` generic macro with C11).
.
Note that some `z_X_t` typed values can be constructed without needing to
`z_loan` their owned variants. This allows you to reduce the number of copies
performed in your program.
.
The examples have been written with C11 in mind, using the conventions we
encourage you to follow.
.
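As an illustrative sketch of these conventions, here is a minimal session
open/close modeled on the zenoh-c examples (assuming the zenoh-c headers and
library are installed):
```c
#include <stdio.h>
#include "zenoh.h"

int main(void) {
    // z_owned_config_t is destructible: its name starts with z_owned.
    z_owned_config_t config = z_config_default();

    // z_open takes ownership of the config: z_move makes that explicit.
    z_owned_session_t session = z_open(z_move(config));
    if (!z_check(session)) {  // generic validity check (C11)
        printf("Unable to open session!\n");
        return -1;
    }

    // z_close consumes the session; z_owned types are double-free safe.
    z_close(z_move(session));
    return 0;
}
```
.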
Finally, we strongly advise that you refrain from using structure fields that
start with `_`:
* We try to maintain a common API between `zenoh-c` and
[`zenoh-pico`](https://github.com/eclipse-zenoh/zenoh-pico), such that porting
code from one to the other is, ideally, trivial. However, some types must have
distinct representations in either library, meaning that using these
representations explicitly will get you in trouble when porting.
* We reserve the right to change the memory layout of any type which has
`_`-prefixed fields, so trying to use them might cause your code to break on
updates.
.
## Logging
By default, zenoh-c enables Zenoh's logging library upon using the `z_open` or
`z_scout` functions. This behaviour can be disabled by adding
`-DDISABLE_LOGGER_AUTOINIT:bool=true` to the `cmake` configuration command. The
logger may then be manually re-enabled with the `zc_init_logger` function.
.
## Cross-Compilation
* The following alternative options have been introduced to facilitate
cross-compilation.
> :warning: **WARNING** :warning: : additional effort may be necessary,
depending on your environment.
.
- `-DZENOHC_CARGO_CHANNEL=nightly|beta|stable`: refers to a specific Rust
toolchain release, see
[rust-channels](https://rust-lang.github.io/rustup/concepts/channels.html)
- `-DZENOHC_CARGO_FLAGS`: several optional flags can be used for compilation,
see [cargo flags](https://doc.rust-lang.org/cargo/commands/cargo-build.html)
- `-DZENOHC_CUSTOM_TARGET`: specifies a cross-compilation target. Currently
Rust supports several Tier-1, Tier-2 and Tier-3
[targets](https://doc.rust-lang.org/nightly/rustc/platform-support.html), but
keep in mind that zenoh-c only has support for the following targets:
`aarch64-unknown-linux-gnu`, `x86_64-unknown-linux-gnu`,
`arm-unknown-linux-gnueabi`
.
Let's put it all together in an example, assuming you want to cross-compile
for `aarch64-unknown-linux-gnu`:
.
1. Install the required packages
- `sudo apt install gcc-aarch64-linux-gnu`
2. (Only if you use `nightly`)
- `rustup component add rust-src --toolchain nightly`
3. Compile zenoh-c. Assume that it's in the `zenoh-c` directory. Notice that
the build in this sample is performed outside of the source directory:
```bash
$ export RUSTFLAGS="-Clinker=aarch64-linux-gnu-gcc -Car=aarch64-linux-gnu-ar"
$ mkdir -p build && cd build
$ cmake ../zenoh-c -DZENOHC_CARGO_CHANNEL=nightly
-DZENOHC_CARGO_FLAGS="-Zbuild-std=std,panic_abort"
-DZENOHC_CUSTOM_TARGET="aarch64-unknown-linux-gnu"
-DCMAKE_INSTALL_PREFIX=../aarch64/stage
$ cmake --build . --target install
```
Additionally, you can use the `RUSTFLAGS` environment variable to guide the
compilation.
.
.
If all goes well, the build files will be located at
`/path/to/zenoh-c/target/aarch64-unknown-linux-gnu/release`
and the installed files at the path given by `CMAKE_INSTALL_PREFIX`
(`../aarch64/stage` in this example).
.
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-c
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-c
Package: zenoh-backend-filesystem
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9943
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-filesystem_0.10.0-rc_amd64.deb
Size: 3051712
MD5sum: b9ea79c095b8b1f6a52e7624e52e9ed3
SHA1: 60addf4463b874b95d65b0d921e1b1ee7b827ff0
SHA256: 4c719db5039fa8dff75c8bc1424ad6f82a06d608b895aeefed61382244d6d5fb
SHA512: 9441d8f4baf508ab2eb44f6ffefb687052015436062661ff76996e3069d819e8f3456bfdb506d97083ac2b45db2daf7fc1a5c5d7afbd5d9c059b6b24ca2d31a8
Homepage: http://zenoh.io
Description: Backend for Zenoh using the file system
.
[](https://github.com/eclipse-zenoh/zenoh-backend-filesystem/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# File system backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing the key/value publications made via zenoh
to be stored and returned on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on the host's file system to implement the storages.
The library name (without OS-specific prefix and extension) that zenoh will
rely on to find and load it is **`zenoh_backend_fs`**.
.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the `zenoh_backend_fs`
library file is available in `~/.zenoh/lib`.
- Declare the `ZENOH_BACKEND_FS_ROOT` environment variable to the directory
where you want the files to be stored (or exposed from).
If you don't declare it, the `~/.zenoh/zenoh_backend_fs` directory will be
used.
.
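For instance (the directory name here is hypothetical; any writable directory
works):
```bash
export ZENOH_BACKEND_FS_ROOT="$HOME/zenoh-data"
mkdir -p "$ZENOH_BACKEND_FS_ROOT"
```
.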
You can setup storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
```json5
{
  plugins: {
    // configuration of "storage-manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of a "fs" volume (the "zenoh_backend_fs"
        // backend library will be loaded at startup)
        fs: {},
      },
      storages: {
        // configuration of a "demo" storage using the "fs" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key
          // when converting to a file path; this argument is optional.
          strip_prefix: "demo/example",
          volume: {
            id: "fs",
            // the key/values will be stored as files within this
            // directory (relative to ${ZENOH_BACKEND_FS_ROOT})
            dir: "example"
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "fs" backend (the "zenoh_backend_fs" library will be loaded):
`curl -X PUT -H 'content-type:application/json' -d '{}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/fs`
- Add the "demo" storage using the "fs" backend:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",strip_prefix:"demo/example", volume: {id: "fs",
dir:"example"}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put values that will be stored under ${ZENOH_BACKEND_FS_ROOT}/example
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test-1
curl -X PUT -d "B" http://localhost:8000/demo/example/a/b
.
# Retrieve the values
curl http://localhost:8000/demo/example/**
```
.
.
-------------------------------
## Configuration
### Extra configuration for filesystem-backed volumes
.
Volumes using the `fs` backend don't need any extra configuration at the volume
level. Any volume can use the `fs` backend by specifying the value `"fs"` for
the `backend` configuration key. A volume named `fs` will automatically be
backed by the `fs` backend if no other backend is specified.
.
-------------------------------
### Storage-level configuration for filesystem-backed volumes
.
Storages relying on an `fs`-backed volume can specify additional
configuration specific to that volume (`dir` being required), as shown in the
example [above](#setup-via-a-json5-configuration-file):
- `dir` (**required**, string) : The directory that will be used to store data.
.
- `read_only` (optional, boolean) : the storage will only answer to GET
queries. It will not accept any PUT or DELETE message, and won't write any
file. `false` by default.
.
- `on_closure` (optional, string) : the strategy to use when the Storage is
removed. There are 2 options:
- `"do_nothing"`: the storage's directory remains untouched (this is the
default behaviour)
- `"delete_all"`: the storage's directory is deleted with all its content.
.
- `follow_links` (optional, boolean) : If set to `true` the storage will follow
the symbolic links. The default value is `false`.
.
- `keep_mime_types` (optional, boolean) : When replying to a GET query with a
file for which the zenoh encoding is not known, the storage guesses its
mime-type from the file extension. If the mime-type doesn't correspond to a
supported zenoh encoding, this option drives the returned value:
- `true` (default value): a [Custom
value](https://docs.rs/zenoh/latest/zenoh/enum.Value.html#variant.Custom)
is returned with the description set to the mime-type.
- `false`: a [Raw
value](https://docs.rs/zenoh/latest/zenoh/enum.Value.html#variant.Raw) with
APP_OCTET_STREAM encoding is returned.
.
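Putting these storage-level options together, a "demo" storage entry using
them might look like this (a hedged sketch; the values shown are illustrative
defaults):
```json5
demo: {
  key_expr: "demo/example/**",
  strip_prefix: "demo/example",
  volume: {
    id: "fs",
    dir: "example",
    // optional settings described above, shown with their defaults:
    read_only: false,
    on_closure: "do_nothing",
    follow_links: false,
    keep_mime_types: true
  }
}
```
.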
-------------------------------
## **Behaviour of the backend**
.
### Mapping to file system
Each **storage** will map to a directory with path:
`${ZENOH_BACKEND_FS_ROOT}/<dir>`, where:
* `${ZENOH_BACKEND_FS_ROOT}` is an environment variable that can be
specified before zenoh router startup.
If this variable is not specified, `${ZENOH_HOME}/zenoh_backend_fs` will be
used
(where the default value of `${ZENOH_HOME}` is `~/.zenoh`).
* `<dir>` is the `"dir"` property specified at storage creation.
Each zenoh **key/value** put into the storage will map to a file within the
storage's directory, where:
* the file path will be
`${ZENOH_BACKEND_FS_ROOT}/<dir>/<relative_zenoh_key>`, where
`<relative_zenoh_key>` is the zenoh key with the `"strip_prefix"` property
specified at storage creation stripped from its beginning. For instance, a put
on key `demo/example/a/b` with `strip_prefix: "demo/example"` and
`dir: "example"` maps to the file `${ZENOH_BACKEND_FS_ROOT}/example/a/b`.
* the content of the file will be the value written as a RawValue, i.e. the
same bytes buffer that has been transported by zenoh. For UTF-8 compatible
formats (StringUTF8, JSon, Integer, Float...) this means the file will be
readable as text.
* the encoding and the timestamp of the key/value will be stored in a RocksDB
database stored in the storage directory.
.
### Behaviour on deletion
.
On deletion of a key, the corresponding file is removed. An entry with deletion
timestamp is inserted in the
RocksDB database (to avoid re-insertion of points with an older timestamp in
case of un-ordered messages).
At regular intervals, a task cleans up the RocksDB database, removing entries
with old timestamps that don't have a corresponding existing file.
.
### Behaviour on GET
.
On GET operations, the storage searches for matching and existing files, and
returns their raw content as a reply.
For each, the encoding and timestamp are retrieved from the RocksDB database.
But if no entry is found in the
database for a file (e.g. for files created without zenoh), the encoding is
deduced from the file's extension
(using [mime_guess](https://crates.io/crates/mime_guess)), and the timestamp is
deduced from the file's
modification time.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-filesystem/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd` or to any directory where it can
find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-filesystem` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list.d/zenoh.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-filesystem
```
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Clang](https://clang.llvm.org/) and [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be
built with the exact same Rust version as `zenohd`, and using the same version
(or commit number) of the `zenoh` dependency as `zenohd`.
Otherwise, incompatibilities in memory mapping of shared types between `zenohd`
and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
If your `zenohd` version corresponds to an unreleased commit, here with id
`1f20c86`, update the `zenoh` dependency in Cargo.lock with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-filesystem
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-filesystem
Package: zenoh-backend-filesystem
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 8743
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-filesystem_0.10.0-rc_arm64.deb
Size: 2651852
MD5sum: 03bb502ea7ffebeb6adcd801a1cc08a8
SHA1: ccb7b9fcf8aeddd0df3be1dbc9d97764513d5666
SHA256: 43a30258e92e51631c278c9cfbdd5e6b5d50f8a9858084ad7e44eccf382af8ae
SHA512: de2eaaec5649b7fee05e1475694d0a728918c31325d4a6a559d4e05450793a19c70e11a80074c6b89f6d87a061387665e7792af23f215fff162b5d2b5116cc4d
Homepage: http://zenoh.io
Description: Backend for Zenoh using the file system
.
[](https://github.com/eclipse-zenoh/zenoh-backend-filesystem/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# File system backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing the key/value publications made via zenoh
to be stored and returned on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on the host's file system to implement the storages.
The library name (without OS-specific prefix and extension) that zenoh will
rely on to find and load it is **`zenoh_backend_fs`**.
.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the `zenoh_backend_fs`
library file is available in `~/.zenoh/lib`.
- Declare the `ZENOH_BACKEND_FS_ROOT` environment variable to the directory
where you want the files to be stored (or exposed from).
If you don't declare it, the `~/.zenoh/zenoh_backend_fs` directory will be
used.
.
You can setup storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
```json5
{
  plugins: {
    // configuration of "storage-manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of a "fs" volume (the "zenoh_backend_fs"
        // backend library will be loaded at startup)
        fs: {},
      },
      storages: {
        // configuration of a "demo" storage using the "fs" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key
          // when converting to a file path; this argument is optional.
          strip_prefix: "demo/example",
          volume: {
            id: "fs",
            // the key/values will be stored as files within this
            // directory (relative to ${ZENOH_BACKEND_FS_ROOT})
            dir: "example"
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "fs" backend (the "zenoh_backend_fs" library will be loaded):
`curl -X PUT -H 'content-type:application/json' -d '{}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/fs`
- Add the "demo" storage using the "fs" backend:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",strip_prefix:"demo/example", volume: {id: "fs",
dir:"example"}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put values that will be stored under ${ZENOH_BACKEND_FS_ROOT}/example
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test-1
curl -X PUT -d "B" http://localhost:8000/demo/example/a/b
.
# Retrieve the values
curl http://localhost:8000/demo/example/**
```
.
.
-------------------------------
## Configuration
### Extra configuration for filesystem-backed volumes
.
Volumes using the `fs` backend don't need any extra configuration at the volume
level. Any volume can use the `fs` backend by specifying the value `"fs"` for
the `backend` configuration key. A volume named `fs` will automatically be
backed by the `fs` backend if no other backend is specified.
.
-------------------------------
### Storage-level configuration for filesystem-backed volumes
.
Storages relying on an `fs`-backed volume can specify additional
configuration specific to that volume (`dir` being required), as shown in the
example [above](#setup-via-a-json5-configuration-file):
- `dir` (**required**, string) : The directory that will be used to store data.
.
- `read_only` (optional, boolean) : the storage will only answer to GET
queries. It will not accept any PUT or DELETE message, and won't write any
file. `false` by default.
.
- `on_closure` (optional, string) : the strategy to use when the Storage is
removed. There are 2 options:
- `"do_nothing"`: the storage's directory remains untouched (this is the
default behaviour)
- `"delete_all"`: the storage's directory is deleted with all its content.
.
- `follow_links` (optional, boolean) : If set to `true` the storage will follow
the symbolic links. The default value is `false`.
.
- `keep_mime_types` (optional, boolean) : When replying to a GET query with a
file for which the zenoh encoding is not known, the storage guesses its
mime-type from the file extension. If the mime-type doesn't correspond to a
supported zenoh encoding, this option drives the returned value:
- `true` (default value): a [Custom
value](https://docs.rs/zenoh/latest/zenoh/enum.Value.html#variant.Custom)
is returned with the description set to the mime-type.
- `false`: a [Raw
value](https://docs.rs/zenoh/latest/zenoh/enum.Value.html#variant.Raw) with
APP_OCTET_STREAM encoding is returned.
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to file system
Each **storage** will map to a directory with path:
`${ZENOH_BACKEND_FS_ROOT}/<dir>`, where:
* `${ZENOH_BACKEND_FS_ROOT}` is an environment variable that can be
specified before zenoh router startup.
If this variable is not specified, `${ZENOH_HOME}/zenoh_backend_fs` will be
used
(where the default value of `${ZENOH_HOME}` is `~/.zenoh`).
* `<dir>` is the `"dir"` property specified at storage creation.
Each zenoh **key/value** put into the storage will map to a file within the
storage's directory, where:
* the file path will be
`${ZENOH_BACKEND_FS_ROOT}/<dir>/<relative_zenoh_key>`, where
`<relative_zenoh_key>` is the zenoh key with the `"strip_prefix"` property
specified at storage creation stripped from its beginning. For instance, a put
on key `demo/example/a/b` with `strip_prefix: "demo/example"` and
`dir: "example"` maps to the file `${ZENOH_BACKEND_FS_ROOT}/example/a/b`.
* the content of the file will be the value written as a RawValue, i.e. the
same bytes buffer that has been transported by zenoh. For UTF-8 compatible
formats (StringUTF8, JSon, Integer, Float...) this means the file will be
readable as text.
* the encoding and the timestamp of the key/value will be stored in a RocksDB
database stored in the storage directory.
.
### Behaviour on deletion
.
On deletion of a key, the corresponding file is removed. An entry with deletion
timestamp is inserted in the
RocksDB database (to avoid re-insertion of points with an older timestamp in
case of un-ordered messages).
At regular intervals, a task cleans up the RocksDB database, removing entries
with old timestamps that don't have a corresponding existing file.
.
### Behaviour on GET
.
On GET operations, the storage searches for matching and existing files, and
returns their raw content as a reply.
For each, the encoding and timestamp are retrieved from the RocksDB database.
But if no entry is found in the
database for a file (e.g. for files created without zenoh), the encoding is
deduced from the file's extension
(using [mime_guess](https://crates.io/crates/mime_guess)), and the timestamp is
deduced from the file's
modification time.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-filesystem/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd` or to any directory where it can
find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-filesystem` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list.d/zenoh.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-filesystem
```
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Clang](https://clang.llvm.org/) and [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be
built with the exact same Rust version as `zenohd`, and using the same version
(or commit number) of the `zenoh` dependency as `zenohd`.
Otherwise, incompatibilities in memory mapping of shared types between `zenohd`
and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
If your `zenohd` version corresponds to an unreleased commit, here with id
`1f20c86`, update the `zenoh` dependency in Cargo.lock with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-filesystem
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-filesystem
Package: zenoh-backend-filesystem
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 8326
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-filesystem_0.10.0-rc_armel.deb
Size: 2594748
MD5sum: 84648e53008ee43dd4d287248cc673b6
SHA1: d01212b13363582181417eb96083e8bd1dc2250b
SHA256: e8b8a5cd00b9055cba85ed3f54729e4ac07ce46ec8870d60f2a84d32c055f031
SHA512: a790330c8ee5ca4c0fa45249661f4de2a4bbccfb1e80b5a30d98f81a9eed1adae0b6c72f726e9fce398f92fae68d532d0954c3b697a802229b54815b274d7267
Homepage: http://zenoh.io
Description: Backend for Zenoh using the file system
.
[](https://github.com/eclipse-zenoh/zenoh-backend-filesystem/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# File system backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing the key/value publications made via zenoh
to be stored and returned on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on the host's file system to implement the storages.
The library name (without OS-specific prefix and extension) that zenoh will
rely on to find and load it is **`zenoh_backend_fs`**.
.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the `zenoh_backend_fs`
library file is available in `~/.zenoh/lib`.
- Declare the `ZENOH_BACKEND_FS_ROOT` environment variable to the directory
where you want the files to be stored (or exposed from).
If you don't declare it, the `~/.zenoh/zenoh_backend_fs` directory will be
used.
.
You can setup storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
```json5
{
  plugins: {
    // configuration of "storage-manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of a "fs" volume (the "zenoh_backend_fs"
        // backend library will be loaded at startup)
        fs: {},
      },
      storages: {
        // configuration of a "demo" storage using the "fs" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key
          // when converting to a file path; this argument is optional.
          strip_prefix: "demo/example",
          volume: {
            id: "fs",
            // the key/values will be stored as files within this
            // directory (relative to ${ZENOH_BACKEND_FS_ROOT})
            dir: "example"
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "fs" backend (the "zenoh_backend_fs" library will be loaded):
`curl -X PUT -H 'content-type:application/json' -d '{}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/fs`
- Add the "demo" storage using the "fs" backend:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",strip_prefix:"demo/example", volume: {id: "fs",
dir:"example"}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put values that will be stored under ${ZENOH_BACKEND_FS_ROOT}/example
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test-1
curl -X PUT -d "B" http://localhost:8000/demo/example/a/b
.
# Retrieve the values
curl http://localhost:8000/demo/example/**
```
.
.
-------------------------------
## Configuration
### Extra configuration for filesystem-backed volumes
.
Volumes using the `fs` backend don't need any extra configuration at the volume
level. Any volume can use the `fs` backend by specifying the value `"fs"` for
the `backend` configuration key. A volume named `fs` will automatically be
backed by the `fs` backend if no other backend is specified.
.
-------------------------------
### Storage-level configuration for filesystem-backed volumes
.
Storages relying on an `fs`-backed volume support additional configuration
specific to that volume, as shown in the example
[above](#setup-via-a-json5-configuration-file):
- `dir` (**required**, string) : The directory that will be used to store data.
.
- `read_only` (optional, boolean) : the storage will only answer GET
queries. It will not accept any PUT or DELETE message, and won't write any
file. `false` by default.
.
- `on_closure` (optional, string) : the strategy to use when the Storage is
removed. There are 2 options:
- `"do_nothing"`: the storage's directory remains untouched (this is the
default behaviour)
- `"delete_all"`: the storage's directory is deleted with all its content.
.
- `follow_links` (optional, boolean) : If set to `true` the storage will follow
the symbolic links. The default value is `false`.
.
- `keep_mime_types` (optional, boolean) : When replying to a GET query with a
file for which the zenoh encoding is not known, the storage guesses its
mime-type from the file extension. If the mime-type doesn't correspond to a
supported zenoh encoding, this option drives the returned value:
- `true` (default value): a [Custom
value](https://docs.rs/zenoh/latest/zenoh/enum.Value.html#variant.Custom)
is returned with the description set to the mime-type.
- `false`: a [Raw
value](https://docs.rs/zenoh/latest/zenoh/enum.Value.html#variant.Raw) with
APP_OCTET_STREAM encoding is returned.
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to file system
Each **storage** will map to a directory with path:
`${ZENOH_BACKEND_FS_ROOT}/<dir>`, where:
* `${ZENOH_BACKEND_FS_ROOT}` is an environment variable that can be
specified before zenoh router startup.
If this variable is not specified, `${ZENOH_HOME}/zenoh_backend_fs` will be
used (where the default value of `${ZENOH_HOME}` is `~/.zenoh`).
* `<dir>` is the `"dir"` property specified at storage creation.
.
Each zenoh **key/value** put into the storage will map to a file within the
storage's directory, where:
* the file path will be
`${ZENOH_BACKEND_FS_ROOT}/<dir>/<relative_zenoh_key>`, where
`<relative_zenoh_key>` is the zenoh key, stripped of the `"strip_prefix"`
property specified at storage creation.
* the content of the file will be the value written as a RawValue, i.e. the
same bytes buffer that has been transported by zenoh. For UTF-8 compatible
formats (StringUTF8, JSON, Integer, Float...) this means the file will be
readable as text.
* the encoding and the timestamp of the key/value will be stored in a RocksDB
database located in the storage directory.
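The key-to-path mapping described above can be mimicked locally, without
running zenoh, to see where a given key would land (a sketch using the "demo"
storage example; the paths are illustrative):
.
```shell
# Simulate the mapping of key "demo/example/a/b" for a storage with
# strip_prefix "demo/example" and dir "example".
ROOT="${ZENOH_BACKEND_FS_ROOT:-$HOME/.zenoh/zenoh_backend_fs}"
KEY="demo/example/a/b"
REL="${KEY#demo/example/}"   # strip the prefix -> "a/b"
FILE="$ROOT/example/$REL"
echo "$FILE"
```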
.
### Behaviour on deletion
.
On deletion of a key, the corresponding file is removed. An entry with the
deletion timestamp is inserted in the RocksDB database (to avoid re-insertion
of points with an older timestamp in case of out-of-order messages).
At regular intervals, a task cleans up the RocksDB database, removing entries
with old timestamps that no longer have a corresponding file.
.
### Behaviour on GET
.
On GET operations, the storage searches for matching, existing files, and
returns their raw content as replies.
For each file, the encoding and timestamp are retrieved from the RocksDB
database. If no entry is found in the database for a file (e.g. for files
created without zenoh), the encoding is deduced from the file's extension
(using [mime_guess](https://crates.io/crates/mime_guess)), and the timestamp
from the file's modification time.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-filesystem/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or in any directory where zenohd
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-filesystem` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list.d/zenoh.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-filesystem
```
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort into
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Clang](https://clang.llvm.org/) and [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using the same version (or commit id) of the `zenoh` dependency as
`zenohd`. Otherwise, incompatibilities in the memory mapping of shared types
between `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
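If you want to script the toolchain match, the rustc version can be
extracted from that output; a minimal sketch using the sample line above
(the variable names are illustrative):
.
```shell
# Extract the rustc version from a sample `zenohd --version` line.
VERSION_LINE='The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)'
RUSTC_VERSION=$(printf '%s\n' "$VERSION_LINE" | sed -n 's/.*rustc \([0-9][0-9.]*\).*/\1/p')
echo "$RUSTC_VERSION"   # -> 1.64.0
```
.
The extracted value can then be fed to `rustup default "$RUSTC_VERSION"`.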
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
Here, the `zenohd` version corresponds to an unreleased commit with id
`1f20c86`.
Update the `zenoh` dependency in Cargo.lock with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-filesystem
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-filesystem
Package: zenoh-backend-filesystem
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 6554
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-filesystem_0.10.0-rc_armhf.deb
Size: 2695500
MD5sum: 6ab639279a0396cf487f92b5e1d5da5f
SHA1: e896c7824fd13811e1ab91677ce31535544ab76f
SHA256: 3e34f4674f1474f5b130eed939cd8266f066857eeb0419e8e4c5a7a65d057b7b
SHA512: af16a35c379fd0aa14780a046aff74851b0875e8bd7f3a97ca175aafa91cc77d9ec29144e896076eac8b7600eaf375b21edc1dc296d289775b1b13fb87fa6b2c
Homepage: http://zenoh.io
Description: Backend for Zenoh using the file system
.
[](https://github.com/eclipse-zenoh/zenoh-backend-filesystem/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# File system backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database or a file system) that allows storing the key/value publications
made via zenoh and returning them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on the host's file system to implement the storages.
The library name (without the OS-specific prefix and extension) that zenoh
relies on to find and load this backend is **`zenoh_backend_fs`**.
.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the `zenoh_backend_fs`
library file is available in `~/.zenoh/lib`.
- Set the `ZENOH_BACKEND_FS_ROOT` environment variable to the directory
where you want the files to be stored (or exposed from).
If you don't set it, the `~/.zenoh/zenoh_backend_fs` directory will be
used.
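For instance, this prerequisite can be prepared with a couple of shell
commands before starting the router (the directory path below is just an
example, not a required location):
.
```shell
# Choose where the storage files will live (example path) and
# make sure the directory exists before starting zenohd.
export ZENOH_BACKEND_FS_ROOT="$HOME/zenoh_fs_storage"
mkdir -p "$ZENOH_BACKEND_FS_ROOT"
echo "$ZENOH_BACKEND_FS_ROOT"
```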
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST
API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
```json5
{
  plugins: {
    // configuration of the "storage_manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of a "fs" volume (the "zenoh_backend_fs"
        // backend library will be loaded at startup)
        fs: {},
      },
      storages: {
        // configuration of a "demo" storage using the "fs" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when
          // converting to a file path
          // this argument is optional
          strip_prefix: "demo/example",
          volume: {
            id: "fs",
            // the key/values will be stored as files within this
            // directory (relative to ${ZENOH_BACKEND_FS_ROOT})
            dir: "example"
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "fs" backend (the "zenoh_backend_fs" library will be loaded):
`curl -X PUT -H 'content-type:application/json' -d '{}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/fs`
- Add the "demo" storage using the "fs" backend:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",strip_prefix:"demo/example", volume: {id: "fs",
dir:"example"}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put values that will be stored under ${ZENOH_BACKEND_FS_ROOT}/example
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test-1
curl -X PUT -d "B" http://localhost:8000/demo/example/a/b
.
# Retrieve the values
curl http://localhost:8000/demo/example/**
```
.
.
-------------------------------
## Configuration
### Extra configuration for filesystem-backed volumes
.
Volumes using the `fs` backend don't need any extra configuration at the volume
level. Any volume can use the `fs` backend by specifying the value `"fs"` for
the `backend` configuration key. A volume named `fs` will automatically be
backed by the `fs` backend if no other backend is specified.
.
-------------------------------
### Storage-level configuration for filesystem-backed volumes
.
Storages relying on an `fs`-backed volume support additional configuration
specific to that volume, as shown in the example
[above](#setup-via-a-json5-configuration-file):
- `dir` (**required**, string) : The directory that will be used to store data.
.
- `read_only` (optional, boolean) : the storage will only answer GET
queries. It will not accept any PUT or DELETE message, and won't write any
file. `false` by default.
.
- `on_closure` (optional, string) : the strategy to use when the Storage is
removed. There are 2 options:
- `"do_nothing"`: the storage's directory remains untouched (this is the
default behaviour)
- `"delete_all"`: the storage's directory is deleted with all its content.
.
- `follow_links` (optional, boolean) : If set to `true` the storage will follow
the symbolic links. The default value is `false`.
.
- `keep_mime_types` (optional, boolean) : When replying to a GET query with a
file for which the zenoh encoding is not known, the storage guesses its
mime-type from the file extension. If the mime-type doesn't correspond to a
supported zenoh encoding, this option drives the returned value:
- `true` (default value): a [Custom
value](https://docs.rs/zenoh/latest/zenoh/enum.Value.html#variant.Custom)
is returned with the description set to the mime-type.
- `false`: a [Raw
value](https://docs.rs/zenoh/latest/zenoh/enum.Value.html#variant.Raw) with
APP_OCTET_STREAM encoding is returned.
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to file system
Each **storage** will map to a directory with path:
`${ZENOH_BACKEND_FS_ROOT}/<dir>`, where:
* `${ZENOH_BACKEND_FS_ROOT}` is an environment variable that can be
specified before zenoh router startup.
If this variable is not specified, `${ZENOH_HOME}/zenoh_backend_fs` will be
used (where the default value of `${ZENOH_HOME}` is `~/.zenoh`).
* `<dir>` is the `"dir"` property specified at storage creation.
.
Each zenoh **key/value** put into the storage will map to a file within the
storage's directory, where:
* the file path will be
`${ZENOH_BACKEND_FS_ROOT}/<dir>/<relative_zenoh_key>`, where
`<relative_zenoh_key>` is the zenoh key, stripped of the `"strip_prefix"`
property specified at storage creation.
* the content of the file will be the value written as a RawValue, i.e. the
same bytes buffer that has been transported by zenoh. For UTF-8 compatible
formats (StringUTF8, JSON, Integer, Float...) this means the file will be
readable as text.
* the encoding and the timestamp of the key/value will be stored in a RocksDB
database located in the storage directory.
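The key-to-path mapping described above can be mimicked locally, without
running zenoh, to see where a given key would land (a sketch using the "demo"
storage example; the paths are illustrative):
.
```shell
# Simulate the mapping of key "demo/example/a/b" for a storage with
# strip_prefix "demo/example" and dir "example".
ROOT="${ZENOH_BACKEND_FS_ROOT:-$HOME/.zenoh/zenoh_backend_fs}"
KEY="demo/example/a/b"
REL="${KEY#demo/example/}"   # strip the prefix -> "a/b"
FILE="$ROOT/example/$REL"
echo "$FILE"
```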
.
### Behaviour on deletion
.
On deletion of a key, the corresponding file is removed. An entry with the
deletion timestamp is inserted in the RocksDB database (to avoid re-insertion
of points with an older timestamp in case of out-of-order messages).
At regular intervals, a task cleans up the RocksDB database, removing entries
with old timestamps that no longer have a corresponding file.
.
### Behaviour on GET
.
On GET operations, the storage searches for matching, existing files, and
returns their raw content as replies.
For each file, the encoding and timestamp are retrieved from the RocksDB
database. If no entry is found in the database for a file (e.g. for files
created without zenoh), the encoding is deduced from the file's extension
(using [mime_guess](https://crates.io/crates/mime_guess)), and the timestamp
from the file's modification time.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-filesystem/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or in any directory where zenohd
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-filesystem` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list.d/zenoh.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-filesystem
```
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort into
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Clang](https://clang.llvm.org/) and [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using the same version (or commit id) of the `zenoh` dependency as
`zenohd`. Otherwise, incompatibilities in the memory mapping of shared types
between `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
Here, the `zenohd` version corresponds to an unreleased commit with id
`1f20c86`.
Update the `zenoh` dependency in Cargo.lock with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-filesystem
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-filesystem
Package: zenoh-backend-influxdb
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 5337
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-influxdb_0.10.0-rc_amd64.deb
Size: 1671028
MD5sum: 71259d952276fb71ebc92186fa2c7eec
SHA1: 2991431a9e96b5195e6f76cb446c4f05708e3f81
SHA256: 6433ba00de5315635b29d68634e3fc08e807dc25a65e4c8f8eaaade326e0c35a
SHA512: 902c9bfb9fe4e4771b77191f9af80422a628b52f5513953956290c55ad811fb88f503db7fc19d79bb46038fdb7547e4c57bbba2e176605debf1ff07895dbb17d
Homepage: http://zenoh.io
Description: Backend for Zenoh using InfluxDB
.
[](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# InfluxDB backend
.
In Zenoh, a backend is a storage technology (such as a DBMS, a time-series
database or a file system) that allows storing the key/value publications
made via zenoh and returning them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on an
[InfluxDB](https://www.influxdata.com/products/influxdb/) server
to implement the storages.
The library name (without the OS-specific prefix and extension) that zenoh
relies on to find and load this backend is **`zenoh_backend_influxdb`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
:warning: InfluxDB 2.x is not yet supported. InfluxDB 1.8 minimum is required.
.
-------------------------------
## :warning: Documentation for previous 0.5 versions:
The following documentation relates to the version currently in development
in the "master" branch: 0.6.x.
.
For previous versions see the README and code of the corresponding tagged
version:
-
[0.5.0-beta.9](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/tree/0.5.0-beta.9#readme)
-
[0.5.0-beta.8](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/tree/0.5.0-beta.8#readme)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the
`zenoh_backend_influxdb` library file is available in `~/.zenoh/lib`.
- You have an InfluxDB service running and listening on
`http://localhost:8086`
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST
API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing for example:
```json5
{
  plugins: {
    // configuration of the "storage_manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of an "influxdb" volume (the
        // "zenoh_backend_influxdb" backend library will be loaded at startup)
        influxdb: {
          // URL of the InfluxDB service
          url: "http://localhost:8086",
          private: {
            // If needed: InfluxDB credentials, preferably an admin
            // user, for database creation and drop
            //username: "admin",
            //password: "password"
          }
        }
      },
      storages: {
        // configuration of a "demo" storage using the "influxdb" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when
          // converting to a database key,
          // i.e. "demo/example/a/b" will be stored as "a/b"
          // this option is optional
          strip_prefix: "demo/example",
          volume: {
            id: "influxdb",
            // the database name within InfluxDB
            db: "zenoh_example",
            // if the database doesn't exist, create it
            create_db: true,
            // strategy on storage closure
            on_closure: "drop_db",
            private: {
              // If needed: InfluxDB credentials, with read/write
              // privileges for the database
              //username: "user",
              //password: "password"
            }
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "influxdb" volume (the "zenoh_backend_influxdb" library will be
loaded), connected to the InfluxDB service on http://localhost:8086:
`curl -X PUT -H 'content-type:application/json' -d
'{url:"http://localhost:8086"}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/influxdb`
- Add the "demo" storage using the "influxdb" volume:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",volume:{id:"influxdb",db:"zenoh_example",create_db:true}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put some values at different time intervals
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test
curl -X PUT -d "TEST-2" http://localhost:8000/demo/example/test
curl -X PUT -d "TEST-3" http://localhost:8000/demo/example/test
.
# Retrieve them as a time series, where '_time=[..]' means "infinite time range"
curl -g 'http://localhost:8000/demo/example/test?_time=[..]'
```
.
.
-------------------------------
## Volume configuration
InfluxDB-backed volumes need some configuration to work:
.
- **`"url"`** (**required**) : a URL to the InfluxDB service. Example:
`http://localhost:8086`
.
- **`"username"`** (optional) : an [InfluxDB
admin](https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#admin-users)
user name. It will be used for creation of databases, granting read/write
privileges of databases mapped to storages and dropping of databases and
measurements.
.
- **`"password"`** (optional) : the admin user's password.
.
Both `username` and `password` should be hidden behind a `private` gate, as
shown in the example [above](#setup-via-a-json5-configuration-file). In
general, if you wish for a part of the configuration to be hidden when
configuration is queried, you should hide it behind a `private` gate.
.
-------------------------------
## Volume-specific storage configuration
Storages relying on an `influxdb`-backed volume may have additional
configuration through the `volume` section:
- **`"db"`** (optional, string) : the InfluxDB database name the storage will
map into. If not specified, a random name will be generated, and the
corresponding database will be created (even if `"create_db"` is not set).
.
- **`"create_db"`** (optional, boolean) : create the InfluxDB database if it
doesn't already exist.
By default the database is not created, unless the `"db"` property is left
unspecified.
*(the value doesn't matter, only the property's existence is checked)*
.
- **`"on_closure"`** (optional, string) : the strategy to use when the Storage
is removed. There are 3 options:
- *unset* or `"do_nothing"`: the database remains untouched (this is the
default behaviour)
- `"drop_db"`: the database is dropped (i.e. removed)
- `"drop_series"`: all the series (measurements) are dropped and the database
remains empty.
.
- **`"username"`** (optional, string) : an InfluxDB user name (usually
[non-admin](https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#non-admin-users)).
It will be used to read/write points in the database on GET/PUT/DELETE zenoh
operations.
.
- **`"password"`** (optional, string) : the user's password.
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to InfluxDB concepts
Each **storage** will map to an InfluxDB **database**.
Each **key** to store will map to an InfluxDB
[**measurement**](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#measurement)
named with the key stripped from the `"strip_prefix"` property (see below).
Each **key/value** put into the storage will map to an InfluxDB
[**point**](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#point)
reusing the timestamp set by zenoh
(but with a precision of nanoseconds). The fields and tags of the point are
the following:
- `"kind"` tag: the zenoh change kind (`"PUT"` for a value that has been put,
or `"DEL"` to mark the deletion of the key)
- `"timestamp"` field: the original zenoh timestamp
- `"encoding"` field: the value's encoding flag
- `"base64"` field: a boolean indicating if the value is encoded in base64
- `"value"` field: the value as a string, possibly encoded in base64 for
binary values.
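To illustrate the `"base64"`/`"value"` pair: a binary payload would be stored
base64-encoded with `"base64"` set to `true`, while plain text is stored
as-is. A small sketch with the standard `base64` tool (the payload is the one
from the REST test above):
.
```shell
# A binary value is stored with "base64"=true and the "value" field
# holding its base64 encoding; decoding recovers the original bytes.
printf 'TEST-1' | base64        # -> VEVTVC0x
printf 'VEVTVC0x' | base64 -d   # -> TEST-1
```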
.
### Behaviour on deletion
On deletion of a key, all points with a timestamp before the deletion message
are deleted.
A point with `"kind"="DEL"` is inserted (to avoid re-insertion of points with
an older timestamp in case of out-of-order messages).
After a delay (5 seconds), the measurement corresponding to the deleted key is
dropped if it still contains no points.
.
### Behaviour on GET
On GET operations, by default the storage returns only the latest point for
each key/measurement.
This is to be coherent with other backend technologies that only store one
value per key.
If you want to get a time series as a result of a GET operation, you need to
specify a time range via the `"_time"` argument in your
[Selector](https://github.com/eclipse-zenoh/roadmap/tree/main/rfcs/ALL/Selectors).
.
Examples of selectors:
```bash
# get the complete time-series
/demo/example/**?_time=[..]
.
# get points within a fixed date interval
/demo/example/influxdb/**?_time=[2020-01-01T00:00:00Z..2020-01-02T12:00:00.000000000Z]
.
# get points within a relative date interval
/demo/example/influxdb/**?_time=[now(-2d)..now(-1d)]
```
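When passing such selectors over the REST API, remember curl's `-g` flag (as
used in the test above) so that the brackets are sent literally instead of
being treated as glob patterns. A sketch composing such a URL (host and port
as in the examples):
.
```shell
# Compose a REST URL carrying a time-range selector; pass it to
# `curl -g` so the [ and ] are not glob-expanded by curl.
SELECTOR='demo/example/test?_time=[now(-2d)..now(-1d)]'
URL="http://localhost:8000/${SELECTOR}"
echo "$URL"
```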
.
See the [`"_time"`
RFC](https://github.com/eclipse-zenoh/roadmap/blob/main/rfcs/ALL/Selectors/_time.md)
for a complete description of the time range format.
.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-influxdb/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or in any directory where zenohd
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-influxdb` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-influxdb
```
.
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort into
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using the same version (or commit id) of the `zenoh` dependency as
`zenohd`. Otherwise, incompatibilities in the memory mapping of shared types
between `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
If the `zenohd` version corresponds to an unreleased commit (here, with id
`1f20c86`), update the `zenoh` dependency in `Cargo.lock` with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
```bash
$ cargo build --release --all-targets
```
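.
Once built, the library must be placed where `zenohd` can find it, as for the
manual installation above. A minimal sketch, assuming a Linux build where
cargo emits `target/release/libzenoh_backend_influxdb.so` and using
`~/.zenoh/lib` as the destination:
.
```bash
# Sketch: copy the freshly built backend where zenohd looks for libraries.
# ZENOH_LIB_DIR and the .so file name are assumptions; adjust the extension
# on macOS (.dylib) or Windows (.dll).
DEST="${ZENOH_LIB_DIR:-$HOME/.zenoh/lib}"
mkdir -p "$DEST"
if [ -f target/release/libzenoh_backend_influxdb.so ]; then
  cp target/release/libzenoh_backend_influxdb.so "$DEST/"
fi
echo "backend directory: $DEST"
```
.
Restart `zenohd` afterwards so the storage manager picks up the library.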
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-influxdb
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-influxdb
Package: zenoh-backend-influxdb
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 4801
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-influxdb_0.10.0-rc_arm64.deb
Size: 1489788
MD5sum: 402cfbb5a497ed25e684f915a6daf4e9
SHA1: 56689b08d0707168779d44f41148bd3c94749105
SHA256: 3b520c402ed04e4a526ea1c4760a88d1583202a0f83f5303880a19d0b2c2a13b
SHA512: b923406e4049c3251fcb197eb9ce2ff1b70b43d428d7549756d459cd01fcb34cb2026be9ab1ed3ec44e52a61eca4fd3b60fcd80568f6d5b6714dd60f26ebfffd
Homepage: http://zenoh.io
Description: Backend for Zenoh using InfluxDB
.
[](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# InfluxDB backend
.
In Zenoh a backend is a storage technology (such as a DBMS, time-series
database, file system...) that stores the key/value publications made via
zenoh and returns them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on an
[InfluxDB](https://www.influxdata.com/products/influxdb/) server
to implement the storages.
Its library name (without the OS-specific prefix and extension), which zenoh
relies on to find and load it, is **`zenoh_backend_influxdb`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
:warning: InfluxDB 2.x is not yet supported. InfluxDB 1.8 minimum is required.
.
-------------------------------
## :warning: Documentation for previous 0.5 versions:
The following documentation relates to the version currently in development in
the "master" branch: 0.6.x.
.
For previous versions see the README and code of the corresponding tagged
version:
-
[0.5.0-beta.9](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/tree/0.5.0-beta.9#readme)
-
[0.5.0-beta.8](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/tree/0.5.0-beta.8#readme)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the
`zenoh_backend_influxdb` library file is available in `~/.zenoh/lib`.
- You have an InfluxDB service running and listening on
`http://localhost:8086`
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing for example:
```json5
{
  plugins: {
    // configuration of "storage_manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of an "influxdb" volume (the "zenoh_backend_influxdb"
        // backend library will be loaded at startup)
        influxdb: {
          // URL to the InfluxDB service
          url: "http://localhost:8086",
          private: {
            // If needed: InfluxDB credentials, preferably admin,
            // for database creation and drop
            //username: "admin",
            //password: "password"
          }
        }
      },
      storages: {
        // configuration of a "demo" storage using the "influxdb" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when
          // converting to a database key,
          // i.e. "demo/example/a/b" will be stored as "a/b"
          // (this option is optional)
          strip_prefix: "demo/example",
          volume: {
            id: "influxdb",
            // the database name within InfluxDB
            db: "zenoh_example",
            // if the database doesn't exist, create it
            create_db: true,
            // strategy on storage closure
            on_closure: "drop_db",
            private: {
              // If needed: InfluxDB credentials, with read/write
              // privileges for the database
              //username: "user",
              //password: "password"
            }
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "influxdb" volume (the "zenoh_backend_influxdb" library will be
loaded), connected to the InfluxDB service on http://localhost:8086:
`curl -X PUT -H 'content-type:application/json' -d
'{url:"http://localhost:8086"}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/influxdb`
- Add the "demo" storage using the "influxdb" volume:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",volume:{id:"influxdb",db:"zenoh_example",create_db:true}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put some values at different time intervals
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test
curl -X PUT -d "TEST-2" http://localhost:8000/demo/example/test
curl -X PUT -d "TEST-3" http://localhost:8000/demo/example/test
.
# Retrieve them as a time series, where '_time=[..]' means "infinite time range"
curl -g 'http://localhost:8000/demo/example/test?_time=[..]'
```
.
.
-------------------------------
## Volume configuration
InfluxDB-backed volumes need some configuration to work:
.
- **`"url"`** (**required**) : a URL to the InfluxDB service. Example:
`http://localhost:8086`
.
- **`"username"`** (optional) : an [InfluxDB
admin](https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#admin-users)
user name. It will be used for creation of databases, granting read/write
privileges of databases mapped to storages and dropping of databases and
measurements.
.
- **`"password"`** (optional) : the admin user's password.
.
Both `username` and `password` should be hidden behind a `private` gate, as
shown in the example [above](#setup-via-a-json5-configuration-file). In
general, if you wish for a part of the configuration to be hidden when
configuration is queried, you should hide it behind a `private` gate.
.
-------------------------------
## Volume-specific storage configuration
Storages relying on an `influxdb`-backed volume may have additional
configuration through the `volume` section:
- **`"db"`** (optional, string) : the InfluxDB database name the storage will
map into. If not specified, a random name will be generated, and the
corresponding database will be created (even if `"create_db"` is not set).
.
- **`"create_db"`** (optional, boolean) : create the InfluxDB database if it
doesn't already exist.
By default the database is not created, unless the `"db"` property is not
specified.
*(the value doesn't matter, only the property existence is checked)*
.
- **`"on_closure"`** (optional, string) : the strategy to use when the Storage
is removed. There are 3 options:
- *unset* or `"do_nothing"`: the database remains untouched (this is the
default behaviour)
- `"drop_db"`: the database is dropped (i.e. removed)
- `"drop_series"`: all the series (measurements) are dropped and the database
remains empty.
.
- **`"username"`** (optional, string) : an InfluxDB user name (usually
[non-admin](https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#non-admin-users)).
It will be used to read/write points in the database on GET/PUT/DELETE zenoh
operations.
.
- **`"password"`** (optional, string) : the user's password.
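.
Putting the options above together, a storage's `volume` section could look
like the following sketch (names and values are illustrative):
.
```json5
volume: {
  id: "influxdb",
  // explicit database name (otherwise a random one is generated)
  db: "zenoh_example",
  // create the database if it doesn't exist yet
  create_db: true,
  // drop all measurements but keep the database when the storage is removed
  on_closure: "drop_series",
  private: {
    // non-admin credentials with read/write privileges on the database
    //username: "user",
    //password: "password"
  }
}
```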
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to InfluxDB concepts
Each **storage** will map to an InfluxDB **database**.
Each **key** to store will map to an InfluxDB
[**measurement**](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#measurement)
named after the key, stripped of the `"strip_prefix"` property (see below).
Each **key/value** put into the storage will map to an InfluxDB
[**point**](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#point)
reusing the timestamp set by zenoh
(but with a precision of nanoseconds). The fields and tags of each point are
the following:
- `"kind"` tag: the zenoh change kind (`"PUT"` for a value that has been put,
or `"DEL"` to mark the deletion of the key)
- `"timestamp"` field: the original zenoh timestamp
- `"encoding"` field: the value's encoding flag
- `"base64"` field: a boolean indicating if the value is encoded in base64
- `"value"` field: the value as a string, possibly encoded in base64 for binary
values.
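.
As an illustration only (the exact serialization is an assumption, not a
documented format), a PUT of `TEST-1` on key `demo/example/test` with
`strip_prefix: "demo/example"` would yield a point roughly like this in
InfluxDB line protocol:
.
```
test,kind=PUT timestamp="<zenoh HLC timestamp>",encoding="text/plain",base64=false,value="TEST-1" 1663603200000000000
```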
.
### Behaviour on deletion
On deletion of a key, all points with a timestamp before the deletion message
are deleted.
A point with `"kind"="DEL"` is inserted (to avoid re-insertion of points with
an older timestamp in case of unordered messages).
After a delay (5 seconds), the measurement corresponding to the deleted key is
dropped if it still contains no points.
.
### Behaviour on GET
On GET operations, by default the storage returns only the latest point for
each key/measurement.
This is to be consistent with other backend technologies that only store one
value per key.
If you want to get a time series as the result of a GET operation, you need to
specify a time range via
the `"_time"` argument in your
[Selector](https://github.com/eclipse-zenoh/roadmap/tree/main/rfcs/ALL/Selectors).
.
Examples of selectors:
```bash
# get the complete time-series
/demo/example/**?_time=[..]
.
# get points within a fixed date interval
/demo/example/influxdb/**?_time=[2020-01-01T00:00:00Z..2020-01-02T12:00:00.000000000Z]
.
# get points within a relative date interval
/demo/example/influxdb/**?_time=[now(-2d)..now(-1d)]
```
.
See the [`"_time"`
RFC](https://github.com/eclipse-zenoh/roadmap/blob/main/rfcs/ALL/Selectors/_time.md)
for a complete description of the time range format.
.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-influxdb/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or to any directory where `zenohd`
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-influxdb` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-influxdb
```
.
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using for the `zenoh` dependency the same version (or commit id) as
`zenohd`.
Otherwise, incompatibilities in the memory layout of shared types between
`zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
If the `zenohd` version corresponds to an unreleased commit (here, with id
`1f20c86`), update the `zenoh` dependency in `Cargo.lock` with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-influxdb
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-influxdb
Package: zenoh-backend-influxdb
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 4260
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-influxdb_0.10.0-rc_armel.deb
Size: 1288424
MD5sum: 300c94481d63e1951813183f03aaa841
SHA1: 7781bc36b66c6be1a61493f141e4636339d15163
SHA256: b7de9536083ee1ab4c898c304a5069ebfb6d8305cb308ecb4c4b3d215cdc7bd8
SHA512: d7108c5c9589d971b4d2b83390977e94b2a2d6be03a6a3df02703ce0d3fc8fd148dafa924c59e41a704157e9948049c6194fb86ebc903511aa46dacbc542dd78
Homepage: http://zenoh.io
Description: Backend for Zenoh using InfluxDB
.
[](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# InfluxDB backend
.
In Zenoh a backend is a storage technology (such as a DBMS, time-series
database, file system...) that stores the key/value publications made via
zenoh and returns them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on an
[InfluxDB](https://www.influxdata.com/products/influxdb/) server
to implement the storages.
Its library name (without the OS-specific prefix and extension), which zenoh
relies on to find and load it, is **`zenoh_backend_influxdb`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
:warning: InfluxDB 2.x is not yet supported. InfluxDB 1.8 minimum is required.
.
-------------------------------
## :warning: Documentation for previous 0.5 versions:
The following documentation relates to the version currently in development in
the "master" branch: 0.6.x.
.
For previous versions see the README and code of the corresponding tagged
version:
-
[0.5.0-beta.9](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/tree/0.5.0-beta.9#readme)
-
[0.5.0-beta.8](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/tree/0.5.0-beta.8#readme)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the
`zenoh_backend_influxdb` library file is available in `~/.zenoh/lib`.
- You have an InfluxDB service running and listening on
`http://localhost:8086`
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing for example:
```json5
{
  plugins: {
    // configuration of "storage_manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of an "influxdb" volume (the "zenoh_backend_influxdb"
        // backend library will be loaded at startup)
        influxdb: {
          // URL to the InfluxDB service
          url: "http://localhost:8086",
          private: {
            // If needed: InfluxDB credentials, preferably admin,
            // for database creation and drop
            //username: "admin",
            //password: "password"
          }
        }
      },
      storages: {
        // configuration of a "demo" storage using the "influxdb" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when
          // converting to a database key,
          // i.e. "demo/example/a/b" will be stored as "a/b"
          // (this option is optional)
          strip_prefix: "demo/example",
          volume: {
            id: "influxdb",
            // the database name within InfluxDB
            db: "zenoh_example",
            // if the database doesn't exist, create it
            create_db: true,
            // strategy on storage closure
            on_closure: "drop_db",
            private: {
              // If needed: InfluxDB credentials, with read/write
              // privileges for the database
              //username: "user",
              //password: "password"
            }
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "influxdb" volume (the "zenoh_backend_influxdb" library will be
loaded), connected to the InfluxDB service on http://localhost:8086:
`curl -X PUT -H 'content-type:application/json' -d
'{url:"http://localhost:8086"}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/influxdb`
- Add the "demo" storage using the "influxdb" volume:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",volume:{id:"influxdb",db:"zenoh_example",create_db:true}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put some values at different time intervals
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test
curl -X PUT -d "TEST-2" http://localhost:8000/demo/example/test
curl -X PUT -d "TEST-3" http://localhost:8000/demo/example/test
.
# Retrieve them as a time series, where '_time=[..]' means "infinite time range"
curl -g 'http://localhost:8000/demo/example/test?_time=[..]'
```
.
.
-------------------------------
## Volume configuration
InfluxDB-backed volumes need some configuration to work:
.
- **`"url"`** (**required**) : a URL to the InfluxDB service. Example:
`http://localhost:8086`
.
- **`"username"`** (optional) : an [InfluxDB
admin](https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#admin-users)
user name. It will be used for creation of databases, granting read/write
privileges of databases mapped to storages and dropping of databases and
measurements.
.
- **`"password"`** (optional) : the admin user's password.
.
Both `username` and `password` should be hidden behind a `private` gate, as
shown in the example [above](#setup-via-a-json5-configuration-file). In
general, if you wish for a part of the configuration to be hidden when
configuration is queried, you should hide it behind a `private` gate.
.
-------------------------------
## Volume-specific storage configuration
Storages relying on an `influxdb`-backed volume may have additional
configuration through the `volume` section:
- **`"db"`** (optional, string) : the InfluxDB database name the storage will
map into. If not specified, a random name will be generated, and the
corresponding database will be created (even if `"create_db"` is not set).
.
- **`"create_db"`** (optional, boolean) : create the InfluxDB database if it
doesn't already exist.
By default the database is not created, unless the `"db"` property is not
specified.
*(the value doesn't matter, only the property existence is checked)*
.
- **`"on_closure"`** (optional, string) : the strategy to use when the Storage
is removed. There are 3 options:
- *unset* or `"do_nothing"`: the database remains untouched (this is the
default behaviour)
- `"drop_db"`: the database is dropped (i.e. removed)
- `"drop_series"`: all the series (measurements) are dropped and the database
remains empty.
.
- **`"username"`** (optional, string) : an InfluxDB user name (usually
[non-admin](https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#non-admin-users)).
It will be used to read/write points in the database on GET/PUT/DELETE zenoh
operations.
.
- **`"password"`** (optional, string) : the user's password.
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to InfluxDB concepts
Each **storage** will map to an InfluxDB **database**.
Each **key** to store will map to an InfluxDB
[**measurement**](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#measurement)
named after the key, stripped of the `"strip_prefix"` property (see below).
Each **key/value** put into the storage will map to an InfluxDB
[**point**](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#point)
reusing the timestamp set by zenoh
(but with a precision of nanoseconds). The fields and tags of each point are
the following:
- `"kind"` tag: the zenoh change kind (`"PUT"` for a value that has been put,
or `"DEL"` to mark the deletion of the key)
- `"timestamp"` field: the original zenoh timestamp
- `"encoding"` field: the value's encoding flag
- `"base64"` field: a boolean indicating if the value is encoded in base64
- `"value"` field: the value as a string, possibly encoded in base64 for binary
values.
.
### Behaviour on deletion
On deletion of a key, all points with a timestamp before the deletion message
are deleted.
A point with `"kind"="DEL"` is inserted (to avoid re-insertion of points with
an older timestamp in case of unordered messages).
After a delay (5 seconds), the measurement corresponding to the deleted key is
dropped if it still contains no points.
.
### Behaviour on GET
On GET operations, by default the storage returns only the latest point for
each key/measurement.
This is to be consistent with other backend technologies that only store one
value per key.
If you want to get a time series as the result of a GET operation, you need to
specify a time range via
the `"_time"` argument in your
[Selector](https://github.com/eclipse-zenoh/roadmap/tree/main/rfcs/ALL/Selectors).
.
Examples of selectors:
```bash
# get the complete time-series
/demo/example/**?_time=[..]
.
# get points within a fixed date interval
/demo/example/influxdb/**?_time=[2020-01-01T00:00:00Z..2020-01-02T12:00:00.000000000Z]
.
# get points within a relative date interval
/demo/example/influxdb/**?_time=[now(-2d)..now(-1d)]
```
.
See the [`"_time"`
RFC](https://github.com/eclipse-zenoh/roadmap/blob/main/rfcs/ALL/Selectors/_time.md)
for a complete description of the time range format.
.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-influxdb/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or to any directory where `zenohd`
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-influxdb` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-influxdb
```
.
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using for the `zenoh` dependency the same version (or commit id) as
`zenohd`.
Otherwise, incompatibilities in the memory layout of shared types between
`zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
If the `zenohd` version corresponds to an unreleased commit (here, with id
`1f20c86`), update the `zenoh` dependency in `Cargo.lock` with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-influxdb
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-influxdb
Package: zenoh-backend-influxdb
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 4156
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-influxdb_0.10.0-rc_armhf.deb
Size: 1280768
MD5sum: 76dd25575a3151221b6edf18663f23a9
SHA1: 8eb85bafe8705b718487d44680fdef38c8a03515
SHA256: f2a89e4918e3bebf07f9d13e1e6100437d8e6b717a33a577d349c658129c7ce1
SHA512: 4d8fe614ef4a7fee26b4ff860a348098b9186bf33b47ecedf2956640b570acb9cb00250b22d3b009771d43b9104515b55dd7155b6711da8147fcb428787463ac
Homepage: http://zenoh.io
Description: Backend for Zenoh using InfluxDB
.
[](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# InfluxDB backend
.
In Zenoh a backend is a storage technology (such as a DBMS, time-series
database, file system...) that stores the key/value publications made via
zenoh and returns them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on an
[InfluxDB](https://www.influxdata.com/products/influxdb/) server
to implement the storages.
Its library name (without the OS-specific prefix and extension), which zenoh
relies on to find and load it, is **`zenoh_backend_influxdb`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
:warning: InfluxDB 2.x is not yet supported. InfluxDB 1.8 minimum is required.
.
-------------------------------
## :warning: Documentation for previous 0.5 versions:
The following documentation relates to the version currently in development in
the "master" branch: 0.6.x.
.
For previous versions see the README and code of the corresponding tagged
version:
-
[0.5.0-beta.9](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/tree/0.5.0-beta.9#readme)
-
[0.5.0-beta.8](https://github.com/eclipse-zenoh/zenoh-backend-influxdb/tree/0.5.0-beta.8#readme)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the
`zenoh_backend_influxdb` library file is available in `~/.zenoh/lib`.
- You have an InfluxDB service running and listening on
`http://localhost:8086`
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing for example:
```json5
{
  plugins: {
    // configuration of "storage_manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of an "influxdb" volume (the "zenoh_backend_influxdb"
        // backend library will be loaded at startup)
        influxdb: {
          // URL to the InfluxDB service
          url: "http://localhost:8086",
          private: {
            // If needed: InfluxDB credentials, preferably admin,
            // for database creation and drop
            //username: "admin",
            //password: "password"
          }
        }
      },
      storages: {
        // configuration of a "demo" storage using the "influxdb" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when
          // converting to a database key,
          // i.e. "demo/example/a/b" will be stored as "a/b"
          // (this option is optional)
          strip_prefix: "demo/example",
          volume: {
            id: "influxdb",
            // the database name within InfluxDB
            db: "zenoh_example",
            // if the database doesn't exist, create it
            create_db: true,
            // strategy on storage closure
            on_closure: "drop_db",
            private: {
              // If needed: InfluxDB credentials, with read/write
              // privileges for the database
              //username: "user",
              //password: "password"
            }
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "influxdb" volume (the "zenoh_backend_fs" library will be loaded),
connected to InfluxDB service on http://localhost:8086:
`curl -X PUT -H 'content-type:application/json' -d
'{url:"http://localhost:8086"}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/influxdb`
- Add the "demo" storage using the "influxdb" volume:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",volume:{id:"influxdb",db:"zenoh_example",create_db:true}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put some values at different time intervals
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test
curl -X PUT -d "TEST-2" http://localhost:8000/demo/example/test
curl -X PUT -d "TEST-3" http://localhost:8000/demo/example/test
.
# Retrieve them as a time series, where '_time=[..]' means "infinite time range"
curl -g 'http://localhost:8000/demo/example/test?_time=[..]'
```
.
.
-------------------------------
## Volume configuration
InfluxDB-backed volumes need some configuration to work:
.
- **`"url"`** (**required**) : a URL to the InfluxDB service. Example:
`http://localhost:8086`
.
- **`"username"`** (optional) : an [InfluxDB
admin](https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#admin-users)
user name. It will be used for creation of databases, granting read/write
privileges of databases mapped to storages and dropping of databases and
measurements.
.
- **`"password"`** (optional) : the admin user's password.
.
Both `username` and `password` should be hidden behind a `private` gate, as
shown in the example [above](#setup-via-a-json5-configuration-file). In
general, if you wish for a part of the configuration to be hidden when the
configuration is queried, you should hide it behind a `private` gate.
.
-------------------------------
## Volume-specific storage configuration
Storages relying on an `influxdb`-backed volume may have additional
configuration through the `volume` section:
- **`"db"`** (optional, string) : the InfluxDB database name the storage will
map into. If not specified, a random name will be generated, and the
corresponding database will be created (even if `"create_db"` is not set).
.
- **`"create_db"`** (optional, boolean) : create the InfluxDB database if not
already existing.
By default the database is not created, unless `"db"` property is not
specified.
*(the value doesn't matter, only the property existence is checked)*
.
- **`"on_closure"`** (optional, string) : the strategy to use when the Storage
is removed. There are 3 options:
- *unset* or `"do_nothing"`: the database remains untouched (this is the
default behaviour)
- `"drop_db"`: the database is dropped (i.e. removed)
- `"drop_series"`: all the series (measurements) are dropped and the database
remains empty.
.
- **`"username"`** (optional, string) : an InfluxDB user name (usually
[non-admin](https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#non-admin-users)).
It will be used to read/write points in the database on GET/PUT/DELETE zenoh
operations.
.
- **`"password"`** (optional, string) : the user's password.
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to InfluxDB concepts
Each **storage** will map to an InfluxDB **database**.
Each **key** to store will map to an InfluxDB
[**measurement**](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#measurement)
named with the key stripped of the `"strip_prefix"` property (see below).
Each **key/value** put into the storage will map to an InfluxDB
[**point**](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#point),
reusing the timestamp set by zenoh (but with a precision of nanoseconds). The
fields and tags of the point are the following:
- `"kind"` tag: the zenoh change kind (`"PUT"` for a value that has been put,
or `"DEL"` to mark the deletion of the key)
- `"timestamp"` field: the original zenoh timestamp
- `"encoding"` field: the value's encoding flag
- `"base64"` field: a boolean indicating if the value is encoded in base64
- `"value"` field: the value as a string, possibly encoded in base64 for
binary values.
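.
As a sketch of this mapping, here is a hypothetical Python helper (the actual
backend does this in Rust) that builds the tag and fields listed above for one
sample, falling back to base64 for binary payloads:
.
```python
import base64
def to_influx_fields(kind, zenoh_timestamp, encoding, payload):
    """Build the "kind" tag and the fields listed above for one zenoh sample."""
    try:
        value = payload.decode("utf-8")   # textual value: store as-is
        is_base64 = False
    except UnicodeDecodeError:
        value = base64.b64encode(payload).decode("ascii")  # binary: base64
        is_base64 = True
    tags = {"kind": kind}  # "PUT" or "DEL"
    fields = {"timestamp": zenoh_timestamp, "encoding": encoding,
              "base64": is_base64, "value": value}
    return tags, fields
tags, fields = to_influx_fields("PUT", "2023-01-01T00:00:00Z", "text/plain", b"TEST-1")
```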
.
### Behaviour on deletion
On deletion of a key, all points with a timestamp before the deletion message
are deleted.
A point with `"kind"="DEL"` is inserted (to avoid re-insertion of points with
an older timestamp in case of out-of-order messages).
After a delay (5 seconds), the measurement corresponding to the deleted key is
dropped if it still contains no points.
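.
This deletion rule can be sketched in Python (illustrative pseudologic, not
the backend's actual code): points older than the deletion timestamp are
dropped, and a `DEL` marker fences off late, out-of-order points:
.
```python
def apply_deletion(points, del_ts):
    """Keep only points at or after the deletion timestamp, then add a DEL marker."""
    kept = [p for p in points if p["ts"] >= del_ts]
    kept.append({"ts": del_ts, "kind": "DEL"})
    return kept
pts = [{"ts": 1, "kind": "PUT"}, {"ts": 5, "kind": "PUT"}]
result = apply_deletion(pts, 3)  # the ts=1 point is dropped, ts=5 survives
```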
.
### Behaviour on GET
On GET operations, by default the storage returns only the latest point for
each key/measurement.
This is for coherence with other backend technologies that only store one
value per key.
If you want to get a time series as the result of a GET operation, you need to
specify a time range via the `"_time"` argument in your
[Selector](https://github.com/eclipse-zenoh/roadmap/tree/main/rfcs/ALL/Selectors).
.
Examples of selectors:
```bash
# get the complete time-series
/demo/example/**?_time=[..]
.
# get points within a fixed date interval
/demo/example/influxdb/**?_time=[2020-01-01T00:00:00Z..2020-01-02T12:00:00.000000000Z]
.
# get points within a relative date interval
/demo/example/influxdb/**?_time=[now(-2d)..now(-1d)]
```
.
See the [`"_time"`
RFC](https://github.com/eclipse-zenoh/roadmap/blob/main/rfcs/ALL/Selectors/_time.md)
for a complete description of the time range format.
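.
For programmatic clients, building such selectors is plain string formatting;
a small illustrative Python helper (the `time_range_selector` name is ours,
not a zenoh API):
.
```python
def time_range_selector(key_expr, start="", end=""):
    """Append a '_time=[start..end]' parameter; empty bounds mean unbounded."""
    return f"{key_expr}?_time=[{start}..{end}]"
full_history = time_range_selector("demo/example/**")
window = time_range_selector("demo/example/**", "now(-2d)", "now(-1d)")
```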
.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-influxdb/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or to any directory where it can
find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add the Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-influxdb` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-influxdb
```
.
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
First, install [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using for the `zenoh` dependency the same version (or commit id) as
`zenohd`. Otherwise, incompatibilities in the memory mapping of shared types
between `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
And if the `zenohd` version corresponds to an unreleased commit with id
`1f20c86`, update the `zenoh` dependency in `Cargo.lock` with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
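.
The version-matching steps above can be scripted; for example, a Python sketch
(assuming the `--version` output format shown above) that extracts the rustc
version to feed to `rustup default`:
.
```python
import re
def rustc_version(version_output):
    """Extract the rustc version from 'zenohd --version' output."""
    m = re.search(r"built with rustc (\d+\.\d+\.\d+)", version_output)
    if m is None:
        raise ValueError("no rustc version found in output")
    return m.group(1)
out = "The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)"
version = rustc_version(out)  # "1.64.0"
```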
.
Then build the backend with:
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-influxdb
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-influxdb
Package: zenoh-backend-rocksdb
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9845
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-rocksdb_0.10.0-rc_amd64.deb
Size: 3034288
MD5sum: dadc9b33141790f3f340a95eed1717af
SHA1: 65ce7d028b7f14e1282284d42ca3824bd8bb7565
SHA256: 8e27971932bbd8237655bef4a054669eff0547e9737ce12cd7f0ffc889ca77a1
SHA512: cb8e7b370b388c6dc88f2affd805e08779ad086c1bbf9b114802e814de810380d20e8ec9b9021a589ead74a8f43458cccc79a779d35c0be6c74ba872442ea97b
Homepage: http://zenoh.io
Description: Backend for Zenoh using RocksDB
.
[](https://github.com/eclipse-zenoh/zenoh-backend-rocksdb/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# RocksDB backend
.
In zenoh a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) that can store the key/value publications made via
zenoh and return them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on [RocksDB](https://rocksdb.org/) to implement the
storages.
Its library name (without OS-specific prefix and extension), which zenoh
relies on to find and load it, is **`zenoh_backend_rocksdb`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the
`zenoh_backend_rocksdb` library file is available in `~/.zenoh/lib`.
- Set the `ZENOH_BACKEND_ROCKSDB_ROOT` environment variable to the directory
where you want the RocksDB databases to be stored. If you don't set it, the
`~/.zenoh/zenoh_backend_rocksdb` directory will be used.
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
```json5
{
  plugins: {
    // configuration of the "storage_manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of a "rocksdb" volume (the "zenoh_backend_rocksdb" backend library will be loaded at startup)
        rocksdb: {}
      },
      storages: {
        // configuration of a "demo" storage using the "rocksdb" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when converting to the database key,
          // i.e. "demo/example/a/b" will be stored as "a/b"
          strip_prefix: "demo/example",
          volume: {
            id: "rocksdb",
            // the RocksDB database will be stored in this directory (relative to ${ZENOH_BACKEND_ROCKSDB_ROOT})
            dir: "example",
            // create the RocksDB database if not already existing
            create_db: true
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "rocksdb" backend (the "zenoh_backend_rocksdb" library will be
loaded):
`curl -X PUT -H 'content-type:application/json' -d '{}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/rocksdb`
- Add the "demo" storage using the "rocksdb" backend:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",strip_prefix:"demo/example",volume: {id:
"rocksdb",dir: "example",create_db: true}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put values that will be stored in the RocksDB database
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test-1
curl -X PUT -d "B" http://localhost:8000/demo/example/a/b
.
# Retrieve the values
curl http://localhost:8000/demo/example/**
```
.
-------------------------------
## Volume-specific storage configuration
Storages relying on a RocksDB-backed volume must specify some additional
configuration as shown [above](#setup-via-a-json5-configuration-file):
- **`"dir"`** (**required**, string) : The name of directory where the RocksDB
database is stored.
The absolute path will be `${ZENOH_BACKEND_ROCKSDB_ROOT}/`.
.
- **`"create_db"`** (optional, boolean) : create the RocksDB database if not
already existing. Not set by default.
*(the value doesn't matter, only the property existence is checked)*
.
- **`"read_only"`** (optional, boolean) : the storage will only answer to GET
queries. It will not accept any PUT or DELETE message, and won't put anything
in RocksDB database. Not set by default. *(the value doesn't matter, only the
property existence is checked)*
.
- **`"on_closure"`** (optional, string) : the strategy to use when the Storage
is removed. There are 2 options:
- *unset*: the database remains untouched (this is the default behaviour)
- `"destroy_db"`: the database is destroyed (i.e. removed)
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to RocksDB database
Each **storage** will map to a RocksDB database stored in the directory
`${ZENOH_BACKEND_ROCKSDB_ROOT}/<dir>`, where:
* `${ZENOH_BACKEND_ROCKSDB_ROOT}` is an environment variable that can be
specified before zenoh router startup. If this variable is not specified,
`${ZENOH_HOME}/zenoh_backend_rocksdb` will be used
(where the default value of `${ZENOH_HOME}` is `~/.zenoh`).
* `<dir>` is the `"dir"` property specified at storage creation.
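.
The path resolution above can be sketched as follows (illustrative Python; the
backend itself is written in Rust):
.
```python
import os
def rocksdb_storage_path(dir_prop):
    """Resolve ${ZENOH_BACKEND_ROCKSDB_ROOT}/<dir> with the documented defaults."""
    zenoh_home = os.environ.get("ZENOH_HOME", os.path.expanduser("~/.zenoh"))
    root = os.environ.get("ZENOH_BACKEND_ROCKSDB_ROOT",
                          os.path.join(zenoh_home, "zenoh_backend_rocksdb"))
    return os.path.join(root, dir_prop)
os.environ["ZENOH_BACKEND_ROCKSDB_ROOT"] = "/data/rocksdb"
path = rocksdb_storage_path("example")  # "/data/rocksdb/example"
```
.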
Each zenoh **key/value** put into the storage will map to 2 **key/values** in
the database:
* For both, the database key is the zenoh key, stripped of the
`"strip_prefix"` property specified at storage creation.
* In the `"default"` [Column
Family](https://github.com/facebook/rocksdb/wiki/Column-Families) the key is
put with the zenoh encoded value as a value.
* In the `"data_info"` [Column
Family](https://github.com/facebook/rocksdb/wiki/Column-Families) the key is
put with a bytes buffer encoded in this order:
- the Timestamp encoded as: 8 bytes for the time + 16 bytes for the HLC
ID
- a "is deleted" flag encoded as a boolean on 1 byte
- the encoding prefix flag encoded as a ZInt (variable length)
- the encoding suffix encoded as a String (string length as a ZInt +
string bytes without ending `\0`)
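.
To make that layout concrete, here is an illustrative Python encoder for the
buffer. The big-endian time encoding and the LEB128-style ZInt are assumptions
about the wire format; only the field order comes from the list above:
.
```python
import struct
def zint(n):
    """Variable-length ZInt (assumed LEB128-style, 7 bits per byte)."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)
def encode_data_info(time64, hlc_id, deleted, enc_prefix, enc_suffix):
    """Pack the 'data_info' value in the order listed above."""
    assert len(hlc_id) == 16
    buf = struct.pack(">Q", time64)            # 8 bytes for the time
    buf += hlc_id                              # 16 bytes for the HLC ID
    buf += bytes([1 if deleted else 0])        # "is deleted" flag on 1 byte
    buf += zint(enc_prefix)                    # encoding prefix as a ZInt
    suffix = enc_suffix.encode("utf-8")
    buf += zint(len(suffix)) + suffix          # length-prefixed suffix, no '\0'
    return buf
info = encode_data_info(0x1122334455667788, b"\x00" * 16, False, 4, "text/plain")
```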
.
### Behaviour on deletion
On deletion of a key, the corresponding key is removed from the `"default"`
Column Family. An entry with the "deletion" flag set to true and the deletion
timestamp is inserted in the `"data_info"` Column Family (to avoid
re-insertion of points with an older timestamp in case of out-of-order
messages).
At regular intervals, a task cleans up the `"data_info"` Column Family,
removing entries with old timestamps and the "deletion" flag set to true.
.
### Behaviour on GET
On GET operations:
* if the selector is a unique key (i.e. not containing any `'*'`): the value,
with its encoding and timestamp, for the corresponding key is directly
retrieved from the 2 Column Families using the RocksDB `get` operation.
* if the selector is a key expression: the storage searches for matching
keys, leveraging RocksDB's [Prefix
Seek](https://github.com/facebook/rocksdb/wiki/Prefix-Seek) if possible to
minimize the number of entries to check.
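.
The prefix-seek idea can be sketched over any sorted key set: jump to the
first key not less than the prefix, then scan forward only while keys still
match. A minimal Python illustration (assuming the key expression reduces to
a literal prefix):
.
```python
from bisect import bisect_left
def prefix_scan(sorted_keys, prefix):
    """Return all keys starting with 'prefix', touching only the matching range."""
    i = bisect_left(sorted_keys, prefix)
    out = []
    while i < len(sorted_keys) and sorted_keys[i].startswith(prefix):
        out.append(sorted_keys[i])
        i += 1
    return out
keys = sorted(["a/b", "a/c", "b/a", "b/b/c"])
matches = prefix_scan(keys, "a/")  # ["a/b", "a/c"]
```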
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-rocksdb/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or to any directory where it can
find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add the Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-rocksdb` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list.d/zenoh.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-rocksdb
```
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
First, install [Clang](https://clang.llvm.org/) and [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using for the `zenoh` dependency the same version (or commit id) as
`zenohd`. Otherwise, incompatibilities in the memory mapping of shared types
between `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
And if the `zenohd` version corresponds to an unreleased commit with id
`1f20c86`, update the `zenoh` dependency in `Cargo.lock` with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
.
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-rocksdb
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-rocksdb
Package: zenoh-backend-rocksdb
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 8633
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-rocksdb_0.10.0-rc_arm64.deb
Size: 2628352
MD5sum: 7cb2aeffb7a6a2bd623d153275223910
SHA1: 37bd7a955b51aa87922457e96bbc8970e270af72
SHA256: fbd3460fa9bf3faf696eafef97abfee996cced9ac8b1890ea36751fb1e4d5c8e
SHA512: 84d409c323ed7390e5973410bdb6ca38a6e296c75817310ccfae1c9f41fd1dffd56b2cac5ba026b7ecf6dcd16f67d2568ebc4fa1f7efb0cd62433148e577a339
Homepage: http://zenoh.io
Description: Backend for Zenoh using RocksDB
.
[](https://github.com/eclipse-zenoh/zenoh-backend-rocksdb/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# RocksDB backend
.
In zenoh a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) that can store the key/value publications made via
zenoh and return them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on [RocksDB](https://rocksdb.org/) to implement the
storages.
Its library name (without OS-specific prefix and extension), which zenoh
relies on to find and load it, is **`zenoh_backend_rocksdb`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the
`zenoh_backend_rocksdb` library file is available in `~/.zenoh/lib`.
- Set the `ZENOH_BACKEND_ROCKSDB_ROOT` environment variable to the directory
where you want the RocksDB databases to be stored. If you don't set it, the
`~/.zenoh/zenoh_backend_rocksdb` directory will be used.
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
```json5
{
  plugins: {
    // configuration of the "storage_manager" plugin:
    storage_manager: {
      volumes: {
        // configuration of a "rocksdb" volume (the "zenoh_backend_rocksdb" backend library will be loaded at startup)
        rocksdb: {}
      },
      storages: {
        // configuration of a "demo" storage using the "rocksdb" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when converting to the database key,
          // i.e. "demo/example/a/b" will be stored as "a/b"
          strip_prefix: "demo/example",
          volume: {
            id: "rocksdb",
            // the RocksDB database will be stored in this directory (relative to ${ZENOH_BACKEND_ROCKSDB_ROOT})
            dir: "example",
            // create the RocksDB database if not already existing
            create_db: true
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "rocksdb" backend (the "zenoh_backend_rocksdb" library will be
loaded):
`curl -X PUT -H 'content-type:application/json' -d '{}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/rocksdb`
- Add the "demo" storage using the "rocksdb" backend:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",strip_prefix:"demo/example",volume: {id:
"rocksdb",dir: "example",create_db: true}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put values that will be stored in the RocksDB database
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test-1
curl -X PUT -d "B" http://localhost:8000/demo/example/a/b
.
# Retrieve the values
curl http://localhost:8000/demo/example/**
```
.
-------------------------------
## Volume-specific storage configuration
Storages relying on a RocksDB-backed volume must specify some additional
configuration as shown [above](#setup-via-a-json5-configuration-file):
- **`"dir"`** (**required**, string) : The name of directory where the RocksDB
database is stored.
The absolute path will be `${ZENOH_BACKEND_ROCKSDB_ROOT}/`.
.
- **`"create_db"`** (optional, boolean) : create the RocksDB database if not
already existing. Not set by default.
*(the value doesn't matter, only the property existence is checked)*
.
- **`"read_only"`** (optional, boolean) : the storage will only answer to GET
queries. It will not accept any PUT or DELETE message, and won't put anything
in RocksDB database. Not set by default. *(the value doesn't matter, only the
property existence is checked)*
.
- **`"on_closure"`** (optional, string) : the strategy to use when the Storage
is removed. There are 2 options:
- *unset*: the database remains untouched (this is the default behaviour)
- `"destroy_db"`: the database is destroyed (i.e. removed)
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to RocksDB database
Each **storage** will map to a RocksDB database stored in the directory
`${ZENOH_BACKEND_ROCKSDB_ROOT}/<dir>`, where:
* `${ZENOH_BACKEND_ROCKSDB_ROOT}` is an environment variable that can be
specified before zenoh router startup. If this variable is not specified,
`${ZENOH_HOME}/zenoh_backend_rocksdb` will be used
(where the default value of `${ZENOH_HOME}` is `~/.zenoh`).
* `<dir>` is the `"dir"` property specified at storage creation.
Each zenoh **key/value** put into the storage will map to 2 **key/values** in
the database:
* For both, the database key is the zenoh key, stripped of the
`"strip_prefix"` property specified at storage creation.
* In the `"default"` [Column
Family](https://github.com/facebook/rocksdb/wiki/Column-Families) the key is
put with the zenoh encoded value as a value.
* In the `"data_info"` [Column
Family](https://github.com/facebook/rocksdb/wiki/Column-Families) the key is
put with a bytes buffer encoded in this order:
- the Timestamp encoded as: 8 bytes for the time + 16 bytes for the HLC
ID
- a "is deleted" flag encoded as a boolean on 1 byte
- the encoding prefix flag encoded as a ZInt (variable length)
- the encoding suffix encoded as a String (string length as a ZInt +
string bytes without ending `\0`)
.
### Behaviour on deletion
On deletion of a key, the corresponding key is removed from the `"default"`
Column Family. An entry with the "deletion" flag set to true and the deletion
timestamp is inserted in the `"data_info"` Column Family (to avoid
re-insertion of points with an older timestamp in case of out-of-order
messages).
At regular intervals, a task cleans up the `"data_info"` Column Family,
removing entries with old timestamps and the "deletion" flag set to true.
.
### Behaviour on GET
On GET operations:
* if the selector is a unique key (i.e. not containing any `'*'`): the value,
with its encoding and timestamp, for the corresponding key is directly
retrieved from the 2 Column Families using the RocksDB `get` operation.
* if the selector is a key expression: the storage searches for matching
keys, leveraging RocksDB's [Prefix
Seek](https://github.com/facebook/rocksdb/wiki/Prefix-Seek) if possible to
minimize the number of entries to check.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-rocksdb/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or to any directory where it can
find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add the Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-rocksdb` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list.d/zenoh.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-rocksdb
```
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
First, install [Clang](https://clang.llvm.org/) and [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using for the `zenoh` dependency the same version (or commit id) as
`zenohd`. Otherwise, incompatibilities in the memory mapping of shared types
between `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
And if the `zenohd` version corresponds to an unreleased commit with id
`1f20c86`, update the `zenoh` dependency in `Cargo.lock` with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
.
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-rocksdb
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-rocksdb
Package: zenoh-backend-rocksdb
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 8288
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-rocksdb_0.10.0-rc_armel.deb
Size: 2579416
MD5sum: 41f1351fac1e4dc51eadf726f21868f7
SHA1: e71954b538cd3e719b8b70d4f2d6d69c4201e41d
SHA256: 2b7abefc530621d6944f4b7277e69ab70f8b4998569e6fc620fdd82c4f26b87e
SHA512: 0470373b0d4346cf6bc2e9e263683f0885049bf2fdf716a47f6a89428462f2345e1697231f77ea3fb8ca4609d171f47b2ea8ccbebcaab814cfe7712a6547cf9a
Homepage: http://zenoh.io
Description: Backend for Zenoh using RocksDB
.
[](https://github.com/eclipse-zenoh/zenoh-backend-rocksdb/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# RocksDB backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing to store the
key/value publications made via zenoh and return them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on [RocksDB](https://rocksdb.org/) to implement the
storages.
The library name (without OS-specific prefix and extension) that zenoh will
rely on to find and load this backend is **`zenoh_backend_rocksdb`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the
`zenoh_backend_rocksdb` library file is available in `~/.zenoh/lib`.
- Set the `ZENOH_BACKEND_ROCKSDB_ROOT` environment variable to the
directory where you want the RocksDB databases
to be stored. If you don't set it, the `~/.zenoh/zenoh_backend_rocksdb`
directory will be used.
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
```json5
{
  plugins: {
    // configuration of "storages" plugin:
    storage_manager: {
      volumes: {
        // configuration of a "rocksdb" volume (the "zenoh_backend_rocksdb"
        // backend library will be loaded at startup)
        rocksdb: {}
      },
      storages: {
        // configuration of a "demo" storage using the "rocksdb" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when
          // converting to database key.
          // i.e.: "demo/example/a/b" will be stored as "a/b"
          strip_prefix: "demo/example",
          volume: {
            id: "rocksdb",
            // the RocksDB database will be stored in this directory
            // (relative to ${ZENOH_BACKEND_ROCKSDB_ROOT})
            dir: "example",
            // create the RocksDB database if not already existing
            create_db: true
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
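.
The `strip_prefix` mapping described in the comments above can be sketched as
follows (a minimal illustration, not the plugin's actual code; the helper name
`db_key` is made up):
.
```python
def db_key(zenoh_key: str, strip_prefix: str) -> str:
    # "demo/example/a/b" with strip_prefix "demo/example" is stored as "a/b"
    assert zenoh_key.startswith(strip_prefix)
    return zenoh_key[len(strip_prefix):].lstrip("/")
```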
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "rocksdb" backend (the "zenoh_backend_rocksdb" library will be
loaded):
`curl -X PUT -H 'content-type:application/json' -d '{}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/rocksdb`
- Add the "demo" storage using the "rocksdb" backend:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",strip_prefix:"demo/example",volume: {id:
"rocksdb",dir: "example",create_db: true}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put values that will be stored in the RocksDB database
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test-1
curl -X PUT -d "B" http://localhost:8000/demo/example/a/b
.
# Retrieve the values
curl http://localhost:8000/demo/example/**
```
.
-------------------------------
## Volume-specific storage configuration
Storages relying on a RocksDB-backed volume must specify some additional
configuration as shown [above](#setup-via-a-json5-configuration-file):
- **`"dir"`** (**required**, string) : The name of the directory where the
RocksDB database is stored.
The absolute path will be `${ZENOH_BACKEND_ROCKSDB_ROOT}/<dir>`.
.
- **`"create_db"`** (optional, boolean) : create the RocksDB database if not
already existing. Not set by default.
*(the value doesn't matter, only the property existence is checked)*
.
- **`"read_only"`** (optional, boolean) : the storage will only answer to GET
queries. It will not accept any PUT or DELETE message, and won't put anything
in RocksDB database. Not set by default. *(the value doesn't matter, only the
property existence is checked)*
.
- **`"on_closure"`** (optional, string) : the strategy to use when the Storage
is removed. There are 2 options:
- *unset*: the database remains untouched (this is the default behaviour)
- `"destroy_db"`: the database is destroyed (i.e. removed)
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to RocksDB database
Each **storage** will map to a RocksDB database stored in the directory
`${ZENOH_BACKEND_ROCKSDB_ROOT}/<dir>`, where:
* `${ZENOH_BACKEND_ROCKSDB_ROOT}` is an environment variable that can be
set before zenoh router startup.
If this variable is not set, `${ZENOH_HOME}/zenoh_backend_rocksdb`
will be used
(where the default value of `${ZENOH_HOME}` is `~/.zenoh`).
* `<dir>` is the `"dir"` property specified at storage creation.
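.
The directory resolution above can be sketched as follows (a minimal
illustration; `rocksdb_db_path` is a made-up helper name, not part of the
plugin):
.
```python
import os
def rocksdb_db_path(dir_prop: str) -> str:
    # Resolution order described above:
    # 1. ${ZENOH_BACKEND_ROCKSDB_ROOT} if set,
    # 2. else ${ZENOH_HOME}/zenoh_backend_rocksdb
    #    (ZENOH_HOME itself defaults to ~/.zenoh)
    root = os.environ.get("ZENOH_BACKEND_ROCKSDB_ROOT")
    if root is None:
        zenoh_home = os.environ.get("ZENOH_HOME", os.path.expanduser("~/.zenoh"))
        root = os.path.join(zenoh_home, "zenoh_backend_rocksdb")
    return os.path.join(root, dir_prop)
```
.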
Each zenoh **key/value** put into the storage will map to 2 **key/values** in
the database:
* For both, the database key is the zenoh key, stripped from the
`"strip_prefix"` property specified at storage creation.
* In the `"default"` [Column
Family](https://github.com/facebook/rocksdb/wiki/Column-Families) the key is
put with the zenoh encoded value as a value.
* In the `"data_info"` [Column
Family](https://github.com/facebook/rocksdb/wiki/Column-Families) the key is
put with a bytes buffer encoded in this order:
- the Timestamp encoded as: 8 bytes for the time + 16 bytes for the HLC
ID
- a "is deleted" flag encoded as a boolean on 1 byte
- the encoding prefix flag encoded as a ZInt (variable length)
- the encoding suffix encoded as a String (string length as a ZInt +
string bytes without ending `\0`)
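.
As a minimal sketch of that byte layout (not the plugin's actual code; the
exact ZInt wire format is an assumption here, shown as a LEB128-style varint):
.
```python
import struct
def encode_zint(n: int) -> bytes:
    # Variable-length integer, assumed LEB128-style (7 data bits per byte)
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)
def encode_data_info(time64: int, hlc_id: bytes, deleted: bool,
                     enc_prefix: int, enc_suffix: str) -> bytes:
    assert len(hlc_id) == 16                 # 16 bytes for the HLC ID
    buf = struct.pack(">Q", time64)          # 8 bytes for the time
    buf += hlc_id
    buf += b"\x01" if deleted else b"\x00"   # "is deleted" flag on 1 byte
    buf += encode_zint(enc_prefix)           # encoding prefix as a ZInt
    suffix = enc_suffix.encode()             # no ending \0
    buf += encode_zint(len(suffix)) + suffix # length-prefixed suffix
    return buf
```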
.
### Behaviour on deletion
On deletion of a key, the corresponding entry is removed from the `"default"`
Column Family. An entry with the "deletion" flag set to true and the deletion
timestamp is inserted in the `"data_info"` Column Family
(to avoid re-insertion of points with an older timestamp in case of unordered
messages).
At regular intervals, a task cleans up the `"data_info"` Column Family,
removing entries with old timestamps and the "deletion" flag set to true.
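.
The tombstone mechanism described above can be illustrated with an in-memory
sketch (the names are made up; the real plugin works over the two RocksDB
Column Families):
.
```python
default_cf = {}   # key -> value            (stands in for the "default" CF)
data_info = {}    # key -> (timestamp, deleted_flag)  ("data_info" CF)
def _fresher(key, ts):
    # True if ts is newer than anything we already know for this key
    prev = data_info.get(key)
    return prev is None or ts > prev[0]
def put(key, value, ts):
    if _fresher(key, ts):
        default_cf[key] = value
        data_info[key] = (ts, False)
def delete(key, ts):
    if _fresher(key, ts):
        default_cf.pop(key, None)
        data_info[key] = (ts, True)   # keep a tombstone with the timestamp
```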
.
### Behaviour on GET
On GET operations:
* if the selector is a unique key (i.e. not containing any `'*'`): the value,
its encoding and its timestamp
for the corresponding key are directly retrieved from the 2 Column Families
using the RocksDB `get` operation.
* if the selector is a key expression: the storage searches for matching
keys, leveraging RocksDB's [Prefix
Seek](https://github.com/facebook/rocksdb/wiki/Prefix-Seek) if possible to
minimize the number of entries to check.
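.
The second case can be sketched as extracting the longest literal prefix of
the key expression to seed the prefix seek (an illustration only, not the
actual implementation):
.
```python
def seek_prefix(key_expr: str) -> str:
    # Longest literal prefix before the first wildcard; a prefix seek over
    # this string bounds the set of database entries to check.
    for i, ch in enumerate(key_expr):
        if ch in "*$":
            return key_expr[:i]
    return key_expr
```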
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-rocksdb/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or to any directory where zenohd
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add the Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-rocksdb` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list.d/zenoh.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-rocksdb
```
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Clang](https://clang.llvm.org/) and [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using the same version (or commit id) of the `zenoh` dependency as
`zenohd`.
Otherwise, incompatibilities in memory mapping of shared types between `zenohd`
and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
If the `zenohd` version corresponds to an unreleased commit (here, commit id
`1f20c86`), update the `zenoh` dependency in Cargo.lock with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
.
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-rocksdb
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-rocksdb
Package: zenoh-backend-rocksdb
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 6472
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-rocksdb_0.10.0-rc_armhf.deb
Size: 2682244
MD5sum: 6467ff17ed6c7bbccfb96a8d5110f029
SHA1: de7c7f014934fb777104aef1e844d0e7db7db05c
SHA256: 1e0eb67ec9340216ced90de0336c9b4817a3c9a366124d82db2550d7c25c1c92
SHA512: a510fed0db4ad1aa2c3aa24c5a71faa5b85ca5d6066092de71bc1fec1c2b2f02d313afa95354d7378ba4481f13e81607cff1179e592096868803465c2e945a57
Homepage: http://zenoh.io
Description: Backend for Zenoh using RocksDB
.
[](https://github.com/eclipse-zenoh/zenoh-backend-rocksdb/actions?query=workflow%3A%22CI%22)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# RocksDB backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing to store the
key/value publications made via zenoh and return them on queries.
See the [zenoh documentation](http://zenoh.io/docs/manual/backends/) for more
details.
.
This backend relies on [RocksDB](https://rocksdb.org/) to implement the
storages.
The library name (without OS-specific prefix and extension) that zenoh will
rely on to find and load this backend is **`zenoh_backend_rocksdb`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
-------------------------------
## **Examples of usage**
.
Prerequisites:
- You have a zenoh router (`zenohd`) installed, and the
`zenoh_backend_rocksdb` library file is available in `~/.zenoh/lib`.
- Set the `ZENOH_BACKEND_ROCKSDB_ROOT` environment variable to the
directory where you want the RocksDB databases
to be stored. If you don't set it, the `~/.zenoh/zenoh_backend_rocksdb`
directory will be used.
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
```json5
{
  plugins: {
    // configuration of "storages" plugin:
    storage_manager: {
      volumes: {
        // configuration of a "rocksdb" volume (the "zenoh_backend_rocksdb"
        // backend library will be loaded at startup)
        rocksdb: {}
      },
      storages: {
        // configuration of a "demo" storage using the "rocksdb" volume
        demo: {
          // the key expression this storage will subscribe to
          key_expr: "demo/example/**",
          // this prefix will be stripped from the received key when
          // converting to database key.
          // i.e.: "demo/example/a/b" will be stored as "a/b"
          strip_prefix: "demo/example",
          volume: {
            id: "rocksdb",
            // the RocksDB database will be stored in this directory
            // (relative to ${ZENOH_BACKEND_ROCKSDB_ROOT})
            dir: "example",
            // create the RocksDB database if not already existing
            create_db: true
          }
        }
      }
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 }
  }
}
```
- Run the zenoh router with:
`zenohd -c zenoh.json5`
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router, with write permissions to its admin space:
`zenohd --adminspace-permissions rw`
- Add the "rocksdb" backend (the "zenoh_backend_rocksdb" library will be
loaded):
`curl -X PUT -H 'content-type:application/json' -d '{}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/rocksdb`
- Add the "demo" storage using the "rocksdb" backend:
`curl -X PUT -H 'content-type:application/json' -d
'{key_expr:"demo/example/**",strip_prefix:"demo/example",volume: {id:
"rocksdb",dir: "example",create_db: true}}'
http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/demo`
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
```bash
# Put values that will be stored in the RocksDB database
curl -X PUT -d "TEST-1" http://localhost:8000/demo/example/test-1
curl -X PUT -d "B" http://localhost:8000/demo/example/a/b
.
# Retrieve the values
curl http://localhost:8000/demo/example/**
```
.
-------------------------------
## Volume-specific storage configuration
Storages relying on a RocksDB-backed volume must specify some additional
configuration as shown [above](#setup-via-a-json5-configuration-file):
- **`"dir"`** (**required**, string) : The name of the directory where the
RocksDB database is stored.
The absolute path will be `${ZENOH_BACKEND_ROCKSDB_ROOT}/<dir>`.
.
- **`"create_db"`** (optional, boolean) : create the RocksDB database if not
already existing. Not set by default.
*(the value doesn't matter, only the property existence is checked)*
.
- **`"read_only"`** (optional, boolean) : the storage will only answer to GET
queries. It will not accept any PUT or DELETE message, and won't put anything
in RocksDB database. Not set by default. *(the value doesn't matter, only the
property existence is checked)*
.
- **`"on_closure"`** (optional, string) : the strategy to use when the Storage
is removed. There are 2 options:
- *unset*: the database remains untouched (this is the default behaviour)
- `"destroy_db"`: the database is destroyed (i.e. removed)
.
-------------------------------
## **Behaviour of the backend**
.
### Mapping to RocksDB database
Each **storage** will map to a RocksDB database stored in the directory
`${ZENOH_BACKEND_ROCKSDB_ROOT}/<dir>`, where:
* `${ZENOH_BACKEND_ROCKSDB_ROOT}` is an environment variable that can be
set before zenoh router startup.
If this variable is not set, `${ZENOH_HOME}/zenoh_backend_rocksdb`
will be used
(where the default value of `${ZENOH_HOME}` is `~/.zenoh`).
* `<dir>` is the `"dir"` property specified at storage creation.
Each zenoh **key/value** put into the storage will map to 2 **key/values** in
the database:
* For both, the database key is the zenoh key, stripped from the
`"strip_prefix"` property specified at storage creation.
* In the `"default"` [Column
Family](https://github.com/facebook/rocksdb/wiki/Column-Families) the key is
put with the zenoh encoded value as a value.
* In the `"data_info"` [Column
Family](https://github.com/facebook/rocksdb/wiki/Column-Families) the key is
put with a bytes buffer encoded in this order:
- the Timestamp encoded as: 8 bytes for the time + 16 bytes for the HLC
ID
- a "is deleted" flag encoded as a boolean on 1 byte
- the encoding prefix flag encoded as a ZInt (variable length)
- the encoding suffix encoded as a String (string length as a ZInt +
string bytes without ending `\0`)
.
### Behaviour on deletion
On deletion of a key, the corresponding entry is removed from the `"default"`
Column Family. An entry with the "deletion" flag set to true and the deletion
timestamp is inserted in the `"data_info"` Column Family
(to avoid re-insertion of points with an older timestamp in case of unordered
messages).
At regular intervals, a task cleans up the `"data_info"` Column Family,
removing entries with old timestamps and the "deletion" flag set to true.
.
### Behaviour on GET
On GET operations:
* if the selector is a unique key (i.e. not containing any `'*'`): the value,
its encoding and its timestamp
for the corresponding key are directly retrieved from the 2 Column Families
using the RocksDB `get` operation.
* if the selector is a key expression: the storage searches for matching
keys, leveraging RocksDB's [Prefix
Seek](https://github.com/facebook/rocksdb/wiki/Prefix-Seek) if possible to
minimize the number of entries to check.
.
-------------------------------
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-backend-rocksdb/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or to any directory where zenohd
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add the Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-rocksdb` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list.d/zenoh.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-rocksdb
```
.
-------------------------------
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Clang](https://clang.llvm.org/) and [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be built with the exact same Rust version as `zenohd`,
and using the same version (or commit id) of the `zenoh` dependency as
`zenohd`.
Otherwise, incompatibilities in memory mapping of shared types between `zenohd`
and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
```bash
$ zenohd --version
The zenoh router v0.6.0-beta.1 built with rustc 1.64.0 (a55dd71d5 2022-09-19)
```
Here, `zenohd` has been built with the rustc version `1.64.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.64.0
```
.
If the `zenohd` version corresponds to an unreleased commit (here, commit id
`1f20c86`), update the `zenoh` dependency in Cargo.lock with this command:
```bash
$ cargo update -p zenoh --precise 1f20c86
```
.
Then build the backend with:
.
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-rocksdb
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-rocksdb
Package: zenoh-backend-s3
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 12097
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-s3_0.10.0-rc_amd64.deb
Size: 3043856
MD5sum: da4ad688feed254b43989a6a75c5945d
SHA1: c9f0e47d748faab4d59e44b48bb1dcebdc388f6a
SHA256: ebfddae2d7ce5c3974cf6c201d2b4fcc03db884bf0d892a43521dc31ccaf30c5
SHA512: e11155145782223c64a23a069482eb00ff78acd8a9df56ad6cf2fb898815cad4e2849b61855ec34b02476ef205796b0cddfefcfa29a36ea576aa508948f98430
Homepage: http://zenoh.io
Description: Backend for Zenoh using AWS S3 API
.
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/vSDSpqnbkm)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
# S3 backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing to store the
key/value publications made via zenoh and return them on queries.
See the [zenoh
documentation](https://zenoh.io/docs/manual/plugin-storage-manager/#backends-and-volumes)
for more details.
.
This backend relies on [Amazon S3](https://aws.amazon.com/s3/?nc1=h_ls) to
implement the storages. It is also compatible with
[MinIO](https://min.io/) object storage.
.
The library name (without OS-specific prefix and extension) that zenoh will
rely on to find and load this backend is **`libzenoh_backend_s3`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
---
.
## **Examples of usage**
.
Prerequisites:
.
- You have a zenoh router (`zenohd`) installed, and the `libzenoh_backend_s3`
library file is available in `~/.zenoh/lib`. Alternatively, you can create a
symlink to the library, for instance by running:
.
```
ln -s ~/zenoh-backend-s3/target/release/libzenoh_backend_s3.dylib \
  ~/.zenoh/lib/libzenoh_backend_s3.dylib
```
.
- You have an S3 instance running; this could be an Amazon S3 instance or a
MinIO instance.
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API
(see https://zenoh.io/docs/manual/plugin-storage-manager/).
.
**Setting up a MinIO instance**
.
In order to run the examples of usage from the following section, it is
convenient to launch a MinIO instance. To launch MinIO in a Docker container,
first pull the MinIO image with
.
```
docker pull minio/minio
```
.
And then you can use the following command to launch the instance:
.
```
docker run -p 9000:9000 -p 9090:9090 --user $(id -u):$(id -g) --name minio \
  -e 'MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE' \
  -e 'MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  -v ${HOME}/minio/data:/data \
  quay.io/minio/minio server data --console-address ':9090'
```
.
If successful, then the console can be accessed on http://localhost:9090.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
.
```json5
{
  plugins: {
    // Configuration of "storage_manager" plugin:
    storage_manager: {
      volumes: {
        s3: {
          // AWS region to connect to (see
          // https://docs.aws.amazon.com/general/latest/gr/s3.html).
          // This field is mandatory if you are going to communicate
          // with an AWS S3 server, and optional in case you are
          // working with a MinIO S3 server.
          region: "eu-west-1",
.
          // Endpoint where the S3 server is located.
          // This parameter allows you to specify a custom endpoint
          // when working with a MinIO S3 server.
          // This field is mandatory if you are working with a MinIO
          // server, and optional in case you are working with an AWS
          // S3 server as long as you specified the region, in which
          // case the endpoint will be resolved automatically.
          url: "https://s3.eu-west-1.amazonaws.com",
.
          // Optional TLS specific parameters to enable HTTPS with
          // MinIO. Configuration shared by all the associated storages.
          tls: {
            // Certificate authority to authenticate the server.
            root_ca_certificate: "/home/user/certificates/minio/ca.pem",
          },
        },
      },
      storages: {
        // Configuration of a "s3_storage" storage using the S3 volume.
        // Each storage is associated to a single S3 bucket.
        s3_storage: {
          // The key expression this storage will subscribe to
          key_expr: "s3/example/*",
.
          // This prefix will be stripped from the received key when
          // converting to database key.
          // i.e.: "s3/example/a/b" will be stored as "a/b"
          strip_prefix: "s3/example",
.
          volume: {
            // Id of the volume this storage is associated to
            id: "s3",
.
            // Bucket to which this storage is associated
            bucket: "zenoh-bucket",
.
            // The storage attempts to create the bucket, but if the
            // bucket already exists and is owned by you, then with
            // 'reuse_bucket' you can associate that preexisting bucket
            // to the storage; otherwise it will fail.
            reuse_bucket: true,
.
            // If the storage is read only, it will only handle GET
            // requests
            read_only: false,
.
            // Strategy on storage closure, either `destroy_bucket` or
            // `do_nothing`
            on_closure: "destroy_bucket",
.
            private: {
              // Credentials for interacting with the S3 bucket
              access_key: "SARASAFODNN7EXAMPLE",
              secret_key: "asADasWdKALsI/ASDP22NG/pWokSSLqEXAMPLEKEY",
            },
          },
        },
      },
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 },
  },
}
```
.
- Run the zenoh router with:
```
zenohd -c zenoh.json5
```
.
**Volume configuration when working with AWS S3 storage**
.
When working with the AWS S3 storage, the region must be specified following
the region names indicated in the [Amazon Simple Storage Service endpoints and
quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) documentation.
The url of the endpoint is not required, as the internal endpoint resolver will
automatically find the endpoint associated with the specified region.
.
All the storages associated to the volume will use the same region.
.
The volumes section on the config file will look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      // AWS region to connect to
      region: "eu-west-1",
    }
  },
  ...
}
```
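.
The automatic endpoint resolution mentioned above can be approximated as
follows (a naive sketch; the real resolution performed by the AWS SDK covers
many more cases, and `s3_endpoint` is a made-up helper name):
.
```python
def s3_endpoint(region: str) -> str:
    # e.g. "eu-west-1" -> "https://s3.eu-west-1.amazonaws.com"
    return f"https://s3.{region}.amazonaws.com"
```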
.
**Volume configuration when working with MinIO**
.
Conversely, when working with a MinIO S3 storage, we need to specify the
endpoint of the storage rather than the region, which will be ignored by the
MinIO server. In that case, the region can be omitted.
.
The volumes section on the config file will look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      url: "http://localhost:9000",
    }
  },
  ...
}
```
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router:
```
cargo run --bin=zenohd
```
- Add the "s3" backend (the "zenoh_backend_s3" library will be loaded):
```
curl -X PUT -H 'content-type:application/json' \
  -d '{url: "http://localhost:9000", private: {access_key: "AKIAIOSFODNN7EXAMPLE", secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}' \
  http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/s3
```
- Add the "s3_storage" storage using the "s3" backend:
```
curl -X PUT -H 'content-type:application/json' \
  -d '{key_expr: "s3/example/*", strip_prefix: "s3/example", volume: {id: "s3", bucket: "zenoh-bucket", create_bucket: true, region: "eu-west-3", on_closure: "do_nothing", private: {access_key: "AKIAIOSFODNN7EXAMPLE", secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}}' \
  http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/s3_storage
```
.
### **Tests using the REST API**
.
Using `curl` to publish and query keys/values, you can:
.
```bash
# Put values that will be stored in the S3 storage
curl -X PUT -H 'content-type:application/json' \
  -d '{"example_key": "example_value"}' http://0.0.0.0:8000/s3/example/test
.
# To get the stored object
curl -X GET http://0.0.0.0:8000/s3/example/test
.
# To delete the previous object
curl -X DELETE http://0.0.0.0:8000/s3/example/test
.
# To delete the whole storage and the bucket if configured (note: in order for
# this test to work, you need to set up adminspace read/write permissions)
curl -X DELETE \
  'http://0.0.0.0:8000/@/router/local/config/plugins/storage_manager/storages/s3_storage'
.
# To delete the whole volume (note: in order for this test to work, you need
# to set up adminspace read/write permissions)
curl -X DELETE \
  'http://0.0.0.0:8000/@/router/local/config/plugins/storage_manager/volumes/s3'
```
.
## **Enabling TLS on MinIO**
.
In order to establish secure communication through HTTPS we need to provide a
certificate of the certificate authority that validates the server credentials.
.
TLS certificates can be generated as explained in the [zenoh documentation
using Minica](https://zenoh.io/docs/manual/tls/). When running
.
```
minica --domains localhost
```
.
a private key, a public certificate and a certificate authority certificate
are generated:
.
```
└── certificates
├── localhost
│ ├── cert.pem
│ └── key.pem
├── minica-key.pem
└── minica.pem
```
.
In the config file, we need to specify the `root_ca_certificate`, as this will
allow the S3 plugin to validate the MinIO server keys.
Example:
.
```
tls: {
root_ca_certificate: "/home/user/certificates/minio/minica.pem",
},
```
.
Here, the `root_ca_certificate` corresponds to the generated _minica.pem_ file.
.
The _cert.pem_ and _key.pem_ files correspond to the public certificate and
private key respectively. We need to rename them as _public.crt_ and
_private.key_ respectively and store them under the MinIO configuration
directory (as specified in the [MinIO
documentation](https://min.io/docs/minio/linux/operations/network-encryption.html#enabling-tls)).
If you are running a Docker container as previously shown, you will need to
mount the folder containing the certificates as a volume;
supposing we stored our certificates under `${HOME}/minio/certs`, we need to
start our container as follows:
.
```
docker run -p 9000:9000 -p 9090:9090 --user $(id -u):$(id -g) --name minio \
  -e 'MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE' \
  -e 'MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  -v ${HOME}/minio/data:/data -v ${HOME}/minio/certs:/certs \
  quay.io/minio/minio server data --certs-dir certs --console-address ':9090'
```
.
Finally, the volume configuration should look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      // Endpoint where the S3 server is located
      url: "https://localhost:9000",
.
      // Configure TLS specific parameters
      tls: {
        root_ca_certificate: "/home/user/certificates/minio_certs/minica.pem",
      },
    }
  },
  ...
}
```
.
_Note: do not forget to modify the endpoint protocol, for instance from
`http://localhost:9090` to `https://localhost:9090`_
.
---
.
## How to install it
.
To install the latest release of this backend library, proceed as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
.
- https://download.eclipse.org/zenoh/zenoh-backend-s3/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or in any directory where zenohd
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add the Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-s3` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-s3
```
.
---
.
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
> backend library should be built with the exact same Rust version as `zenohd`.
> Otherwise, incompatibilities in the memory layout of shared types between
> `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
.
```bash
$ zenohd --version
zenohd v0.7.0-rc-365-geca888b4-modified built with rustc 1.69.0 (84c898d65 2023-04-16)
The zenoh router v0.7.0-rc-365-geca888b4-modified built with rustc 1.69.0 (84c898d65 2023-04-16)
```
.
Here, `zenohd` has been built with the rustc version `1.69.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.69.0
```
.
And then build the backend with:
.
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-s3
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-s3
Package: zenoh-backend-s3
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 10890
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-s3_0.10.0-rc_arm64.deb
Size: 2769628
MD5sum: eff8c4419df1201db82d12c75ae6495f
SHA1: 7b72ca7101ba8380076ef66db7e77a73fa4f3c5c
SHA256: 713736d04108f7156d6b8025ab25d47739d2d424b33140e11012fa3029b51245
SHA512: c4d2d7d6de9414043bbca5737f19db96da56de72ac552befa4b4d1de8a1519235dd009066823960cc967973d8ca95372864b9ca13f682e6609868d15a568e69d
Homepage: http://zenoh.io
Description: Backend for Zenoh using AWS S3 API
.
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/vSDSpqnbkm)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
# S3 backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing the key/value publications made via zenoh
to be stored and returned on queries.
See the [zenoh
documentation](https://zenoh.io/docs/manual/plugin-storage-manager/#backends-and-volumes)
for more details.
.
This backend relies on [Amazon S3](https://aws.amazon.com/s3/?nc1=h_ls) to
implement the storages. It is also compatible with [MinIO](https://min.io/)
object storage.
.
The library name (without the OS-specific prefix and extension) that zenoh
relies on to find and load the backend is **`libzenoh_backend_s3`**.
.
:point_right: **Install latest release:** see [below](#how-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#how-to-build-it)
.
---
.
## **Examples of usage**
.
Prerequisites:
.
- You have a zenoh router (`zenohd`) installed, and the `libzenoh_backend_s3`
library file is available in `~/.zenoh/lib`. Alternatively, you can create a
symlink to the library, for instance by running:
.
```
ln -s ~/zenoh-backend-s3/target/release/libzenoh_backend_s3.dylib \
  ~/.zenoh/lib/libzenoh_backend_s3.dylib
```
.
- You have an S3 instance running; this could be an Amazon S3 instance or a
MinIO instance.
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API
(see https://zenoh.io/docs/manual/plugin-storage-manager/).
.
**Setting up a MinIO instance**
.
In order to run the examples of usage from the following section, it is
convenient to launch a MinIO instance. To launch MinIO in a Docker container,
first pull the MinIO image with
.
```
docker pull minio/minio
```
.
And then you can use the following command to launch the instance:
.
```
docker run -p 9000:9000 -p 9090:9090 --user $(id -u):$(id -g) --name minio \
  -e 'MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE' \
  -e 'MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  -v ${HOME}/minio/data:/data \
  quay.io/minio/minio server data --console-address ':9090'
```
.
If successful, the console can be accessed at http://localhost:9090.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
.
```json5
{
  plugins: {
    // Configuration of "storage_manager" plugin:
    storage_manager: {
      volumes: {
        s3: {
          // AWS region to which connect (see https://docs.aws.amazon.com/general/latest/gr/s3.html).
          // This field is mandatory if you are going to communicate with an AWS S3 server and
          // optional in case you are working with a MinIO S3 server.
          region: "eu-west-1",
.
          // Endpoint where the S3 server is located.
          // This parameter allows you to specify a custom endpoint when working with a MinIO S3 server.
          // This field is mandatory if you are working with a MinIO server and optional in case
          // you are working with an AWS S3 server, as long as you specified the region, in which
          // case the endpoint will be resolved automatically.
          url: "https://s3.eu-west-1.amazonaws.com",
.
          // Optional TLS specific parameters to enable HTTPS with MinIO. Configuration shared by
          // all the associated storages.
          tls: {
            // Certificate authority to authenticate the server.
            root_ca_certificate: "/home/user/certificates/minio/ca.pem",
          },
        },
      },
      storages: {
        // Configuration of a "demo" storage using the S3 volume. Each storage is associated to a
        // single S3 bucket.
        s3_storage: {
          // The key expression this storage will subscribe to
          key_expr: "s3/example/*",
.
          // This prefix will be stripped from the received key when converting to database key.
          // i.e.: "s3/example/a/b" will be stored as "a/b"
          strip_prefix: "s3/example",
.
          volume: {
            // Id of the volume this storage is associated to
            id: "s3",
.
            // Bucket to which this storage is associated
            bucket: "zenoh-bucket",
.
            // The storage attempts to create the bucket, but if the bucket already exists and is
            // owned by you, then with 'reuse_bucket' you can associate that preexisting bucket to
            // the storage; otherwise it will fail.
            reuse_bucket: true,
.
            // If the storage is read only, it will only handle GET requests
            read_only: false,
.
            // Strategy on storage closure, either `destroy_bucket` or `do_nothing`
            on_closure: "destroy_bucket",
.
            private: {
              // Credentials for interacting with the S3 bucket
              access_key: "SARASAFODNN7EXAMPLE",
              secret_key: "asADasWdKALsI/ASDP22NG/pWokSSLqEXAMPLEKEY",
            },
          },
        },
      },
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 },
  },
}
```
.
- Run the zenoh router with:
```
zenohd -c zenoh.json5
```
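.
As an illustration of the `strip_prefix` behaviour described in the configuration comments above, a minimal sketch of the mapping (this mimics the behaviour, it is not the plugin's actual code):
.
```python
def database_key(key: str, strip_prefix: str) -> str:
    # Strip the configured prefix (and the '/' that follows it) from the
    # received key before it is used as the stored key,
    # e.g. "s3/example/a/b" -> "a/b".
    prefix = strip_prefix.rstrip("/") + "/"
    return key[len(prefix):] if key.startswith(prefix) else key
```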
.
**Volume configuration when working with AWS S3 storage**
.
When working with the AWS S3 storage, the region must be specified following
the region names indicated in the [Amazon Simple Storage Service endpoints and
quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) documentation.
The endpoint url is not required, as the internal endpoint resolver will
automatically find the endpoint associated to the specified region.
.
All the storages associated to the volume will use the same region.
.
The volumes section of the config file will look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      // AWS region to which connect
      region: "eu-west-1",
    },
  },
  ...
}
```
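.
The automatic endpoint resolution mentioned above essentially derives the endpoint from the region; a simplified sketch of that mapping (the real resolver in the AWS SDK handles many more cases):
.
```python
def default_s3_endpoint(region: str) -> str:
    # Standard AWS S3 regional endpoint pattern. Partitions such as AWS
    # China or GovCloud use different suffixes and are ignored here.
    return f"https://s3.{region}.amazonaws.com"
```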
.
**Volume configuration when working with MinIO**
.
Conversely, when working with a MinIO S3 storage, we need to specify the
endpoint of the storage rather than the region; the region is ignored by the
MinIO server and can therefore be omitted.
.
The volumes section of the config file will look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      url: "http://localhost:9000",
    },
  },
  ...
}
```
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router:
```
cargo run --bin=zenohd
```
- Add the "s3" backend (the "zenoh_backend_s3" library will be loaded):
```
curl -X PUT -H 'content-type:application/json' \
  -d '{url: "http://localhost:9000", private: {access_key: "AKIAIOSFODNN7EXAMPLE", secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}' \
  http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/s3
```
- Add the "s3_storage" storage using the "s3" backend:
```
curl -X PUT -H 'content-type:application/json' \
  -d '{key_expr: "s3/example/*", strip_prefix: "s3/example", volume: {id: "s3", bucket: "zenoh-bucket", create_bucket: true, region: "eu-west-3", on_closure: "do_nothing", private: {access_key: "AKIAIOSFODNN7EXAMPLE", secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}}' \
  http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/s3_storage
```
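.
The same admin-space PUT can also be issued from Python's standard library; a sketch assuming the REST plugin listens on localhost:8000 (the request is only sent when the last line is uncommented):
.
```python
import json
import urllib.request

# The same volume configuration as the curl command above, as strict JSON
# (JSON is a subset of the relaxed JSON5 syntax used elsewhere in this doc).
volume_cfg = {
    "url": "http://localhost:9000",
    "private": {
        "access_key": "AKIAIOSFODNN7EXAMPLE",
        "secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    },
}
req = urllib.request.Request(
    "http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/s3",
    data=json.dumps(volume_cfg).encode(),
    headers={"content-type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req)  # uncomment with a running zenoh router
```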
.
### **Tests using the REST API**
.
You can use `curl` to publish and query keys/values:
.
```bash
# Put a value that will be stored in the S3 storage
curl -X PUT -H 'content-type:application/json' \
  -d '{"example_key": "example_value"}' http://0.0.0.0:8000/s3/example/test
.
# Get the stored object
curl -X GET http://0.0.0.0:8000/s3/example/test
.
# Delete the previous object
curl -X DELETE http://0.0.0.0:8000/s3/example/test
.
# Delete the whole storage, and the bucket if so configured (for this test to
# work, you need to set up adminspace read/write permissions)
curl -X DELETE 'http://0.0.0.0:8000/@/router/local/config/plugins/storage_manager/storages/s3_storage'
.
# Delete the whole volume (for this test to work, you need to set up
# adminspace read/write permissions)
curl -X DELETE 'http://0.0.0.0:8000/@/router/local/config/plugins/storage_manager/volumes/s3'
```
.
## **Enabling TLS on MinIO**
.
In order to establish secure communication through HTTPS, we need to provide
the certificate of the certificate authority that validates the server
credentials.
.
TLS certificates can be generated as explained in the [zenoh documentation
using Minica](https://zenoh.io/docs/manual/tls/). When running
.
```
minica --domains localhost
```
.
a private key, a public certificate and a certificate authority certificate
are generated:
.
```
└── certificates
    ├── localhost
    │   ├── cert.pem
    │   └── key.pem
    ├── minica-key.pem
    └── minica.pem
```
.
In the config file, we need to specify the `root_ca_certificate`, as this
allows the S3 plugin to validate the MinIO server certificates.
Example:
.
```
tls: {
  root_ca_certificate: "/home/user/certificates/minio/minica.pem",
},
```
.
Here, the `root_ca_certificate` corresponds to the generated _minica.pem_ file.
.
The _cert.pem_ and _key.pem_ files correspond to the public certificate and
private key respectively. We need to rename them as _public.crt_ and
_private.key_ respectively and store them under the MinIO configuration
directory (as specified in the [MinIO
documentation](https://min.io/docs/minio/linux/operations/network-encryption.html#enabling-tls)).
If you are running a Docker container as previously shown, you will need to
mount the folder containing the certificates as a volume;
supposing we stored our certificates under `${HOME}/minio/certs`, we need to
start our container as follows:
.
```
docker run -p 9000:9000 -p 9090:9090 --user $(id -u):$(id -g) --name minio \
  -e 'MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE' \
  -e 'MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  -v ${HOME}/minio/data:/data -v ${HOME}/minio/certs:/certs \
  quay.io/minio/minio server data --certs-dir certs --console-address ':9090'
```
.
Finally, the volume configuration should look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      // Endpoint where the S3 server is located
      url: "https://localhost:9000",
.
      // Configure TLS specific parameters
      tls: {
        root_ca_certificate: "/home/user/certificates/minio_certs/minica.pem",
      },
    },
  },
}
```
.
_Note: do not forget to update the endpoint protocol in the volume
configuration, for instance from `http://localhost:9000` to
`https://localhost:9000`._
.
---
.
## How to install it
.
To install the latest release of this backend library, proceed as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
.
- https://download.eclipse.org/zenoh/zenoh-backend-s3/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or in any directory where zenohd
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add the Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-s3` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-s3
```
.
---
.
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
> backend library should be built with the exact same Rust version as `zenohd`.
> Otherwise, incompatibilities in the memory layout of shared types between
> `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
.
```bash
$ zenohd --version
zenohd v0.7.0-rc-365-geca888b4-modified built with rustc 1.69.0 (84c898d65 2023-04-16)
The zenoh router v0.7.0-rc-365-geca888b4-modified built with rustc 1.69.0 (84c898d65 2023-04-16)
```
.
Here, `zenohd` has been built with the rustc version `1.69.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.69.0
```
.
And then build the backend with:
.
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-s3
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-s3
Package: zenoh-backend-s3
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9674
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-s3_0.10.0-rc_armel.deb
Size: 2425660
MD5sum: 46cc28bafec2e70272e0d19eea5e2034
SHA1: e6f7d2c2758b326d02a0bdadd29fb801cf97e8f8
SHA256: 5385516c87318b08f6f7c496d21d63db3d71fd79dd4f3958bb2e810227bdec54
SHA512: bd7fec80567ceae04b29650d16adbd351d0dac014b479c62dded38b58da02d0dc8c1aaaefb27d2307ded9020ac9b2e1a7ca93273ef54615e6688cb9b111612d0
Homepage: http://zenoh.io
Description: Backend for Zenoh using AWS S3 API
.
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/vSDSpqnbkm)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
# S3 backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing the key/value publications made via zenoh
to be stored and returned on queries.
See the [zenoh
documentation](https://zenoh.io/docs/manual/plugin-storage-manager/#backends-and-volumes)
for more details.
.
This backend relies on [Amazon S3](https://aws.amazon.com/s3/?nc1=h_ls) to
implement the storages. It is also compatible with [MinIO](https://min.io/)
object storage.
.
The library name (without the OS-specific prefix and extension) that zenoh
relies on to find and load the backend is **`libzenoh_backend_s3`**.
.
:point_right: **Install latest release:** see [below](#how-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#how-to-build-it)
.
---
.
## **Examples of usage**
.
Prerequisites:
.
- You have a zenoh router (`zenohd`) installed, and the `libzenoh_backend_s3`
library file is available in `~/.zenoh/lib`. Alternatively, you can create a
symlink to the library, for instance by running:
.
```
ln -s ~/zenoh-backend-s3/target/release/libzenoh_backend_s3.dylib \
  ~/.zenoh/lib/libzenoh_backend_s3.dylib
```
.
- You have an S3 instance running; this could be an Amazon S3 instance or a
MinIO instance.
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API
(see https://zenoh.io/docs/manual/plugin-storage-manager/).
.
**Setting up a MinIO instance**
.
In order to run the examples of usage from the following section, it is
convenient to launch a MinIO instance. To launch MinIO in a Docker container,
first pull the MinIO image with
.
```
docker pull minio/minio
```
.
And then you can use the following command to launch the instance:
.
```
docker run -p 9000:9000 -p 9090:9090 --user $(id -u):$(id -g) --name minio \
  -e 'MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE' \
  -e 'MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  -v ${HOME}/minio/data:/data \
  quay.io/minio/minio server data --console-address ':9090'
```
.
If successful, the console can be accessed at http://localhost:9090.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
.
```json5
{
  plugins: {
    // Configuration of "storage_manager" plugin:
    storage_manager: {
      volumes: {
        s3: {
          // AWS region to which connect (see https://docs.aws.amazon.com/general/latest/gr/s3.html).
          // This field is mandatory if you are going to communicate with an AWS S3 server and
          // optional in case you are working with a MinIO S3 server.
          region: "eu-west-1",
.
          // Endpoint where the S3 server is located.
          // This parameter allows you to specify a custom endpoint when working with a MinIO S3 server.
          // This field is mandatory if you are working with a MinIO server and optional in case
          // you are working with an AWS S3 server, as long as you specified the region, in which
          // case the endpoint will be resolved automatically.
          url: "https://s3.eu-west-1.amazonaws.com",
.
          // Optional TLS specific parameters to enable HTTPS with MinIO. Configuration shared by
          // all the associated storages.
          tls: {
            // Certificate authority to authenticate the server.
            root_ca_certificate: "/home/user/certificates/minio/ca.pem",
          },
        },
      },
      storages: {
        // Configuration of a "demo" storage using the S3 volume. Each storage is associated to a
        // single S3 bucket.
        s3_storage: {
          // The key expression this storage will subscribe to
          key_expr: "s3/example/*",
.
          // This prefix will be stripped from the received key when converting to database key.
          // i.e.: "s3/example/a/b" will be stored as "a/b"
          strip_prefix: "s3/example",
.
          volume: {
            // Id of the volume this storage is associated to
            id: "s3",
.
            // Bucket to which this storage is associated
            bucket: "zenoh-bucket",
.
            // The storage attempts to create the bucket, but if the bucket already exists and is
            // owned by you, then with 'reuse_bucket' you can associate that preexisting bucket to
            // the storage; otherwise it will fail.
            reuse_bucket: true,
.
            // If the storage is read only, it will only handle GET requests
            read_only: false,
.
            // Strategy on storage closure, either `destroy_bucket` or `do_nothing`
            on_closure: "destroy_bucket",
.
            private: {
              // Credentials for interacting with the S3 bucket
              access_key: "SARASAFODNN7EXAMPLE",
              secret_key: "asADasWdKALsI/ASDP22NG/pWokSSLqEXAMPLEKEY",
            },
          },
        },
      },
    },
    // Optionally, add the REST plugin
    rest: { http_port: 8000 },
  },
}
```
.
- Run the zenoh router with:
```
zenohd -c zenoh.json5
```
.
**Volume configuration when working with AWS S3 storage**
.
When working with the AWS S3 storage, the region must be specified following
the region names indicated in the [Amazon Simple Storage Service endpoints and
quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) documentation.
The endpoint url is not required, as the internal endpoint resolver will
automatically find the endpoint associated to the specified region.
.
All the storages associated to the volume will use the same region.
.
The volumes section of the config file will look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      // AWS region to which connect
      region: "eu-west-1",
    },
  },
  ...
}
```
.
**Volume configuration when working with MinIO**
.
Conversely, when working with a MinIO S3 storage, we need to specify the
endpoint of the storage rather than the region; the region is ignored by the
MinIO server and can therefore be omitted.
.
The volumes section of the config file will look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      url: "http://localhost:9000",
    },
  },
  ...
}
```
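.
The choice between the two volume variants above can be summed up in a small helper; a hypothetical sketch, not part of the plugin:
.
```python
from typing import Optional

def s3_volume_config(region: Optional[str] = None, url: Optional[str] = None) -> dict:
    # AWS S3: give the region and let the endpoint be resolved automatically.
    # MinIO: give the endpoint url; a region would be ignored by the server.
    if url is not None:
        return {"url": url}
    if region is not None:
        return {"region": region}
    raise ValueError("either a region (AWS) or a url (MinIO) is required")
```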
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router:
```
cargo run --bin=zenohd
```
- Add the "s3" backend (the "zenoh_backend_s3" library will be loaded):
```
curl -X PUT -H 'content-type:application/json' \
  -d '{url: "http://localhost:9000", private: {access_key: "AKIAIOSFODNN7EXAMPLE", secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}' \
  http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/s3
```
- Add the "s3_storage" storage using the "s3" backend:
```
curl -X PUT -H 'content-type:application/json' \
  -d '{key_expr: "s3/example/*", strip_prefix: "s3/example", volume: {id: "s3", bucket: "zenoh-bucket", create_bucket: true, region: "eu-west-3", on_closure: "do_nothing", private: {access_key: "AKIAIOSFODNN7EXAMPLE", secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}}' \
  http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/s3_storage
```
.
### **Tests using the REST API**
.
You can use `curl` to publish and query keys/values:
.
```bash
# Put a value that will be stored in the S3 storage
curl -X PUT -H 'content-type:application/json' \
  -d '{"example_key": "example_value"}' http://0.0.0.0:8000/s3/example/test
.
# Get the stored object
curl -X GET http://0.0.0.0:8000/s3/example/test
.
# Delete the previous object
curl -X DELETE http://0.0.0.0:8000/s3/example/test
.
# Delete the whole storage, and the bucket if so configured (for this test to
# work, you need to set up adminspace read/write permissions)
curl -X DELETE 'http://0.0.0.0:8000/@/router/local/config/plugins/storage_manager/storages/s3_storage'
.
# Delete the whole volume (for this test to work, you need to set up
# adminspace read/write permissions)
curl -X DELETE 'http://0.0.0.0:8000/@/router/local/config/plugins/storage_manager/volumes/s3'
```
.
## **Enabling TLS on MinIO**
.
In order to establish secure communication through HTTPS, we need to provide
the certificate of the certificate authority that validates the server
credentials.
.
TLS certificates can be generated as explained in the [zenoh documentation
using Minica](https://zenoh.io/docs/manual/tls/). When running
.
```
minica --domains localhost
```
.
a private key, a public certificate and a certificate authority certificate
are generated:
.
```
└── certificates
    ├── localhost
    │   ├── cert.pem
    │   └── key.pem
    ├── minica-key.pem
    └── minica.pem
```
.
In the config file, we need to specify the `root_ca_certificate`, as this
allows the S3 plugin to validate the MinIO server certificates.
Example:
.
```
tls: {
  root_ca_certificate: "/home/user/certificates/minio/minica.pem",
},
```
.
Here, the `root_ca_certificate` corresponds to the generated _minica.pem_ file.
.
The _cert.pem_ and _key.pem_ files correspond to the public certificate and
private key respectively. We need to rename them as _public.crt_ and
_private.key_ respectively and store them under the MinIO configuration
directory (as specified in the [MinIO
documentation](https://min.io/docs/minio/linux/operations/network-encryption.html#enabling-tls)).
If you are running a Docker container as previously shown, you will need to
mount the folder containing the certificates as a volume;
supposing we stored our certificates under `${HOME}/minio/certs`, we need to
start our container as follows:
.
```
docker run -p 9000:9000 -p 9090:9090 --user $(id -u):$(id -g) --name minio \
  -e 'MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE' \
  -e 'MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  -v ${HOME}/minio/data:/data -v ${HOME}/minio/certs:/certs \
  quay.io/minio/minio server data --certs-dir certs --console-address ':9090'
```
.
Finally, the volume configuration should look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      // Endpoint where the S3 server is located
      url: "https://localhost:9000",
.
      // Configure TLS specific parameters
      tls: {
        root_ca_certificate: "/home/user/certificates/minio_certs/minica.pem",
      },
    },
  },
}
```
.
_Note: do not forget to update the endpoint protocol in the volume
configuration, for instance from `http://localhost:9000` to
`https://localhost:9000`._
.
---
.
## How to install it
.
To install the latest release of this backend library, proceed as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
.
- https://download.eclipse.org/zenoh/zenoh-backend-s3/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or in any directory where zenohd
can find the backend library (e.g. `/usr/lib` or `~/.zenoh/lib`).
.
### Linux Debian
.
Add the Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-s3` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-s3
```
.
---
.
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
At first, install [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
> backend library should be built with the exact same Rust version as `zenohd`.
> Otherwise, incompatibilities in the memory layout of shared types between
> `zenohd` and the library can lead to a `SIGSEGV` crash.
.
To know the Rust version your `zenohd` has been built with, use the
`--version` option.
Example:
.
```bash
$ zenohd --version
zenohd v0.7.0-rc-365-geca888b4-modified built with rustc 1.69.0 (84c898d65 2023-04-16)
The zenoh router v0.7.0-rc-365-geca888b4-modified built with rustc 1.69.0 (84c898d65 2023-04-16)
```
.
Here, `zenohd` has been built with the rustc version `1.69.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.69.0
```
.
And then build the backend with:
.
```bash
$ cargo build --release --all-targets
```
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-s3
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-s3
Package: zenoh-backend-s3
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9542
Depends: zenoh-plugin-storage-manager (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-backend-s3_0.10.0-rc_armhf.deb
Size: 2441424
MD5sum: e16f8e18281b0fea6472deab059dbc9e
SHA1: 9a18777bd55f962cee87f9275777e05e8c92f03f
SHA256: 9c58157ce92644558021b78a68a8adc016b56eeaf3fff3162b123be4402fb838
SHA512: 605004f99555822593afd272f4d8e2c50d20b24d049fbd0569d7a3fcc7ce2cb2fd9701fa2ad32b76cddceca6d148d12150a3223c4f2e56805fc0f061252e12b7
Homepage: http://zenoh.io
Description: Backend for Zenoh using AWS S3 API
.
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/vSDSpqnbkm)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
.
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
# S3 backend
.
In zenoh, a backend is a storage technology (such as a DBMS, a time-series
database, a file system...) allowing the key/value publications made via zenoh
to be stored and returned on queries.
See the [zenoh
documentation](https://zenoh.io/docs/manual/plugin-storage-manager/#backends-and-volumes)
for more details.
.
This backend relies on [Amazon S3](https://aws.amazon.com/s3/?nc1=h_ls) to
implement the storages. It is also compatible with [MinIO](https://min.io/)
object storage.
.
The library name (without the OS-specific prefix and extension) that zenoh
relies on to find and load the backend is **`libzenoh_backend_s3`**.
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
---
.
## **Examples of usage**
.
Prerequisites:
.
- You have a zenoh router (`zenohd`) installed, and the `libzenoh_backend_s3`
library file is available in `~/.zenoh/lib`. Alternatively, you can create a
symlink to the library, for instance by running:
.
```
ln -s ~/zenoh-backend-s3/target/release/libzenoh_backend_s3.dylib \
  ~/.zenoh/lib/libzenoh_backend_s3.dylib
```
.
- You have an S3 instance running; this could be an Amazon S3 instance or a
MinIO instance.
.
You can set up storages either at zenoh router startup via a configuration
file, or at runtime via the zenoh admin space, using for instance the REST API
(see https://zenoh.io/docs/manual/plugin-storage-manager/).
.
**Setting up a MinIO instance**
.
In order to run the examples of usage from the following section, it is
convenient to launch a MinIO instance. To launch MinIO in a Docker container,
first pull the MinIO image with
.
```
docker pull minio/minio
```
.
And then you can use the following command to launch the instance:
.
```
docker run -p 9000:9000 -p 9090:9090 --user $(id -u):$(id -g) --name minio \
  -e 'MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE' \
  -e 'MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  -v ${HOME}/minio/data:/data \
  quay.io/minio/minio server data --console-address ':9090'
.
If successful, the console can be accessed at http://localhost:9090.
.
### **Setup via a JSON5 configuration file**
.
- Create a `zenoh.json5` configuration file containing:
.
```json5
{
plugins: {
// Configuration of "storage_manager" plugin:
storage_manager: {
volumes: {
s3: {
// AWS region to which to connect (see
// https://docs.aws.amazon.com/general/latest/gr/s3.html).
// This field is mandatory if you are going to communicate with an
// AWS S3 server, and
// optional in case you are working with a MinIO S3 server.
region: "eu-west-1",
.
// Endpoint where the S3 server is located.
// This parameter allows you to specify a custom endpoint when
// working with a MinIO S3 server.
// This field is mandatory if you are working with a MinIO server,
// and optional in case you are working with an AWS S3 server, as
// long as you specified the region, in which case the endpoint
// will be resolved automatically.
url: "https://s3.eu-west-1.amazonaws.com",
.
// Optional TLS-specific parameters to enable HTTPS with MinIO.
// Configuration shared by all the associated storages.
tls: {
// Certificate authority to authenticate the server.
root_ca_certificate: "/home/user/certificates/minio/ca.pem",
},
},
},
storages: {
// Configuration of a "demo" storage using the S3 volume.
// Each storage is associated to a single S3 bucket.
s3_storage: {
// The key expression this storage will subscribe to
key_expr: "s3/example/*",
.
// This prefix will be stripped from the received key when
// converting to a database key,
// i.e.: "s3/example/a/b" will be stored as "a/b".
strip_prefix: "s3/example",
.
volume: {
// Id of the volume this storage is associated to
id: "s3",
.
// Bucket this storage is associated with
bucket: "zenoh-bucket",
.
// The storage attempts to create the bucket, but if the bucket
// already exists and is owned by you, then with 'reuse_bucket'
// you can associate that preexisting bucket to the storage;
// otherwise it will fail.
reuse_bucket: true,
.
// If the storage is read only, it will only handle GET requests
read_only: false,
.
// Strategy on storage closure: either `destroy_bucket` or
// `do_nothing`
on_closure: "destroy_bucket",
.
private: {
// Credentials for interacting with the S3 bucket
access_key: "SARASAFODNN7EXAMPLE",
secret_key: "asADasWdKALsI/ASDP22NG/pWokSSLqEXAMPLEKEY",
},
},
},
},
},
// Optionally, add the REST plugin
rest: { http_port: 8000 },
},
}
```
.
- Run the zenoh router with:
```
zenohd -c zenoh.json5
```
.
**Volume configuration when working with AWS S3 storage**
.
When working with the AWS S3 storage, the region must be specified following
the region names indicated in the [Amazon Simple Storage Service endpoints and
quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) documentation.
The url of the endpoint is not required, as the internal endpoint resolver
will automatically find the endpoint associated to the specified region.
.
All the storages associated to the volume will use the same region.
.
The volumes section on the config file will look like:
.
```
storage_manager {
volumes: {
s3: {
// AWS region to which connect
region: "eu-west-1",
}
},
...
}
```
.
**Volume configuration when working with MinIO**
.
Conversely, when working with a MinIO S3 storage, you need to specify the
endpoint of the storage rather than the region, which is ignored by the MinIO
server and can therefore be omitted.
.
The volumes section on the config file will look like:
.
```
storage_manager {
volumes: {
s3: {
url: "http://localhost:9000",
}
},
...
}
```
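.
Putting the two fragments together: below is a minimal end-to-end sketch for
the MinIO instance launched above. The bucket name and credentials are the
example values from this page; adapt them to your deployment:
.
```json5
{
  plugins: {
    storage_manager: {
      volumes: {
        s3: {
          // MinIO endpoint from the Docker example above
          url: "http://localhost:9000",
        },
      },
      storages: {
        s3_storage: {
          key_expr: "s3/example/*",
          strip_prefix: "s3/example",
          volume: {
            id: "s3",
            bucket: "zenoh-bucket",
            reuse_bucket: true,
            on_closure: "do_nothing",
            private: {
              access_key: "AKIAIOSFODNN7EXAMPLE",
              secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
            },
          },
        },
      },
    },
  },
}
```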
.
### **Setup at runtime via `curl` commands on the admin space**
.
- Run the zenoh router:
```
cargo run --bin=zenohd
```
- Add the "s3" backend (the "zenoh_backend_s3" library will be loaded):
```
curl -X PUT -H 'content-type:application/json' \
  -d '{url: "http://localhost:9000", private: {access_key: "AKIAIOSFODNN7EXAMPLE", secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}' \
  http://localhost:8000/@/router/local/config/plugins/storage_manager/volumes/s3
```
- Add the "s3_storage" storage using the "s3" backend:
```
curl -X PUT -H 'content-type:application/json' \
  -d '{key_expr: "s3/example/*", strip_prefix: "s3/example", volume: {id: "s3", bucket: "zenoh-bucket", create_bucket: true, region: "eu-west-3", on_closure: "do_nothing", private: {access_key: "AKIAIOSFODNN7EXAMPLE", secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}}' \
  http://localhost:8000/@/router/local/config/plugins/storage_manager/storages/s3_storage
```
.
### **Tests using the REST API**
.
You can use `curl` to publish and query keys/values as follows:
.
```bash
# Put values that will be stored in the S3 storage
curl -X PUT -H 'content-type:application/json' \
  -d '{"example_key": "example_value"}' http://0.0.0.0:8000/s3/example/test
.
# To get the stored object
curl -X GET http://0.0.0.0:8000/s3/example/test
.
# To delete the previous object
curl -X DELETE http://0.0.0.0:8000/s3/example/test
.
# To delete the whole storage and the bucket if configured (note: in order for
# this test to work, you need to set up adminspace read/write permissions)
curl -X DELETE 'http://0.0.0.0:8000/@/router/local/config/plugins/storage_manager/storages/s3_storage'
.
# To delete the whole volume (note: in order for this test to work, you need
# to set up adminspace read/write permissions)
curl -X DELETE 'http://0.0.0.0:8000/@/router/local/config/plugins/storage_manager/volumes/s3'
```
.
## **Enabling TLS on MinIO**
.
In order to establish secure communication through HTTPS, we need to provide
the certificate of the certificate authority that validates the server
credentials.
.
TLS certificates can be generated as explained in the [zenoh documentation
using Minica](https://zenoh.io/docs/manual/tls/). When running
.
```
minica --domains localhost
```
.
a private key, a public certificate and a certificate authority certificate
are generated:
.
```
└── certificates
├── localhost
│ ├── cert.pem
│ └── key.pem
├── minica-key.pem
└── minica.pem
```
.
In the config file, we need to specify the `root_ca_certificate`, as this will
allow the S3 plugin to validate the MinIO server keys.
Example:
.
```
tls: {
root_ca_certificate: "/home/user/certificates/minio/minica.pem",
},
```
.
Here, the `root_ca_certificate` corresponds to the generated _minica.pem_ file.
.
The _cert.pem_ and _key.pem_ files correspond to the public certificate and
private key respectively. We need to rename them as _public.crt_ and
_private.key_ respectively and store them under the MinIO configuration
directory (as specified in the [MinIO
documentation](https://min.io/docs/minio/linux/operations/network-encryption.html#enabling-tls)).
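.
As a sketch, this renaming can be done with symlinks. The source paths below
are hypothetical; adjust them to wherever minica wrote your certificates:
.
```bash
# Hypothetical location of the minica output; adjust to your setup.
CERT_SRC="${HOME}/certificates/localhost"
CERT_DST="${HOME}/minio/certs"
mkdir -p "${CERT_DST}"
# MinIO expects the exact file names public.crt and private.key.
ln -sf "${CERT_SRC}/cert.pem" "${CERT_DST}/public.crt"
ln -sf "${CERT_SRC}/key.pem"  "${CERT_DST}/private.key"
```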
In case you are running a Docker container as previously shown, you will need
to mount the folder containing the certificates as a volume; supposing we
stored our certificates under `${HOME}/minio/certs`, we need to start our
container as follows:
.
```
docker run -p 9000:9000 -p 9090:9090 --user $(id -u):$(id -g) --name minio \
  -e 'MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE' \
  -e 'MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  -v ${HOME}/minio/data:/data -v ${HOME}/minio/certs:/certs \
  quay.io/minio/minio server data --certs-dir certs --console-address ':9090'
.
Finally, the volume configuration should look like:
.
```
storage_manager: {
  volumes: {
    s3: {
      // Endpoint where the S3 server is located
      url: "https://localhost:9000",
.
      // Configure TLS-specific parameters
      tls: {
        root_ca_certificate: "/home/user/certificates/minio_certs/minica.pem",
      },
    },
  },
  ...
}
```
.
_Note: do not forget to modify the endpoint protocol, for instance from
`http://localhost:9000` to `https://localhost:9000`._
.
---
.
## How to install it
.
To install the latest release of this backend library, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
.
- https://download.eclipse.org/zenoh/zenoh-backend-s3/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download the `.zip` file.
Unzip it in the same directory as `zenohd`, or in any directory where zenohd
can find the backend library (e.g. /usr/lib or ~/.zenoh/lib).
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list, and install the
`zenoh-backend-s3` package:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" | sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
sudo apt install zenoh-backend-s3
```
.
---
.
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
First, install [Cargo and
Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html). If
you already have the Rust toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
backend library should be
> built with the exact same Rust version as `zenohd`. Otherwise,
incompatibilities in the memory mapping
> of shared types between `zenohd` and the library can lead to a `SIGSEGV`
crash.
.
To know the Rust version your `zenohd` has been built with, use its
`--version` option.
Example:
.
```bash
$ zenohd --version
zenohd v0.7.0-rc-365-geca888b4-modified built with rustc 1.69.0 (84c898d65
2023-04-16)
The zenoh router v0.7.0-rc-365-geca888b4-modified built with rustc 1.69.0
(84c898d65 2023-04-16)
```
.
Here, `zenohd` has been built with the rustc version `1.69.0`.
Install and use this toolchain with the following command:
.
```bash
$ rustup default 1.69.0
```
.
And then build the backend with:
.
```bash
$ cargo build --release --all-targets
```
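.
Once built, `zenohd` still needs to find the library. Below is a minimal
sketch, assuming you built in the repository root and use the default
`~/.zenoh/lib` search path (the extension is `.so` on Linux, `.dylib` on
macOS):
.
```bash
mkdir -p ~/.zenoh/lib
# Symlink the freshly built backend library where zenohd looks for it.
ln -sf "$(pwd)/target/release/libzenoh_backend_s3.so" \
  ~/.zenoh/lib/libzenoh_backend_s3.so
```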
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-backend-s3
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-backend-s3
Package: zenoh-bridge-dds
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 10899
Depends: libc6 (>= 2.29)
Filename: ./0.10.0-rc/zenoh-bridge-dds_0.10.0-rc_amd64.deb
Size: 3455680
MD5sum: 01c44059e6cf73ac1c751a0178398444
SHA1: ffb3376ac75995491d8b1d65ab44c2bd27cec339
SHA256: ea4cf37aec6f6bdeec1a30026509f16307d10d95b555e8ca677ff5c18978c864
SHA512: b8f58ac8879c9dc0850a214ca9a4315585830a1e839d042224b77f8cd3b8e57b062e2c94b3b37fd6596d4b635b92707ff223444fb72acce92250bef1ed681c19
Homepage: http://zenoh.io
Description: Zenoh bridge for ROS2 and DDS in general
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Package: zenoh-bridge-dds
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9530
Depends: libc6:arm64 (>= 2.31)
Filename: ./0.10.0-rc/zenoh-bridge-dds_0.10.0-rc_arm64.deb
Size: 3135084
MD5sum: 928b61d534e73492de48c571f841cb2f
SHA1: fef0e4d4b6d7d0ba24de653bf279a8bb9cf41187
SHA256: e5f1a2be1f9a762c323033b2e73c0984f5a3d2d38cf637af913c8e14839c4a14
SHA512: 345971ea09db57569e2260dba2fca19697d995f689bbd5bd6065ea74bdb6cd4d5228d9ae64a2f245fbe921c458294997f4ba10e437280109881e0f9dd1f878f5
Homepage: http://zenoh.io
Description: Zenoh bridge for ROS2 and DDS in general
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Package: zenoh-bridge-dds
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9632
Depends: libc6:armel (>= 2.31)
Filename: ./0.10.0-rc/zenoh-bridge-dds_0.10.0-rc_armel.deb
Size: 3077192
MD5sum: 39d7a69f43909648fa02ddf7e05ee3dc
SHA1: 1108e3aacaa4751265608003b817343486bafe67
SHA256: ef78a43d69c0b2a0f93b6df6118052af104b2fc65d60c02edd9dc6f3a3eaabf8
SHA512: 57e41ad740509c6035f586e37410d1555e3778798f33df52a5c7732a0bf3c617c40d750ebc9cd35a830054d96a0227aaa7a4044a9cca8943e5190d4b4830369e
Homepage: http://zenoh.io
Description: Zenoh bridge for ROS2 and DDS in general
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Package: zenoh-bridge-dds
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9164
Depends: libc6:armhf (>= 2.31)
Filename: ./0.10.0-rc/zenoh-bridge-dds_0.10.0-rc_armhf.deb
Size: 3082172
MD5sum: 8bf8cb4661544e87673e402d17020a41
SHA1: 330664c2d4799aae55db3bdd7d1e96995ff36022
SHA256: 24012aa34ecce88c02058fbf02b4fdd617b93d2e6bbb4d16a707dfeeb0055e21
SHA512: ab22d04c5010c447876ac510545776b2b1f6e23e0b5db75ec4c71cf7d3cc627af65d33adfc696e8b22bc29e8d624bbd114908aab12dda9d8e1cf715ba67e53a8
Homepage: http://zenoh.io
Description: Zenoh bridge for ROS2 and DDS in general
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Package: zenoh-bridge-mqtt
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 10028
Depends: libc6 (>= 2.29)
Filename: ./0.10.0-rc/zenoh-bridge-mqtt_0.10.0-rc_amd64.deb
Size: 3141324
MD5sum: 7f06a9f21d53805e0735b9b06f43f09a
SHA1: a4963f2d1d115750c8b5e0649a5bebe19c31e103
SHA256: 4ca54fc05f432afbb089c638472e80b1a27409307aae2aa8a74122e677ca8b54
SHA512: 7771039441381e9a41e0dd2849714b550875cd39fd69ee306d22c502e8b3497185ed8f6fcb09df583de368c238bf92693fdc2bff9fe667d442c22f7187f2ab82
Homepage: http://zenoh.io
Description: Zenoh bridge for MQTT
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-mqtt
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-mqtt
Package: zenoh-bridge-mqtt
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 8684
Depends: libc6:arm64 (>= 2.31)
Filename: ./0.10.0-rc/zenoh-bridge-mqtt_0.10.0-rc_arm64.deb
Size: 2845720
MD5sum: dba296f3356241858e390dc7b3a64ca5
SHA1: 83e88cf0f08bbf16386a0b389e1c33e81f2a77d1
SHA256: 831b25812a2f809c0d995122b83c384560afb351ee706354f01f3c8eda717e56
SHA512: 040ba5b5116eeb3ba1df9f94f45b8ff79c1aa4f68c7bd423e2020b8174d4b9a8324ea058dec37fff6cf65c0814dbcf9a25450af9d51a36ee8fd2e537ba935759
Homepage: http://zenoh.io
Description: Zenoh bridge for MQTT
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-mqtt
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-mqtt
Package: zenoh-bridge-ros1
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 11276
Depends: libc6 (>= 2.29)
Filename: ./0.10.0-rc/zenoh-bridge-ros1_0.10.0-rc_amd64.deb
Size: 3391332
MD5sum: 52b53496825c9fe8350aeabc089834c9
SHA1: 0c7d629bc96287b7c54c26afc558fe1b5ea8f4be
SHA256: a2aeb719c59c9ee30f4699a280db9eeef697da06e7c40acf23bbf88c02228bb4
SHA512: 0d3f41c2e6ab0d7e5ae0293a91728314ead9b94a97a584e0637ba21cf4a8125e2f7843a92a7f15a91bd97bba4b7cf777eff42b98d9b283c638dc9652ace14656
Homepage: http://zenoh.io
Description: Zenoh bridge for ROS1
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-ros1
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-ros1
Package: zenoh-bridge-ros1
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9744
Depends: libc6:arm64 (>= 2.31)
Filename: ./0.10.0-rc/zenoh-bridge-ros1_0.10.0-rc_arm64.deb
Size: 3050764
MD5sum: 6f5a759c3bb4774135f14dda77c4ca09
SHA1: b4d88dcb18dc5d88304eab9626ce3af8474a50ee
SHA256: 17db2d020af61bc553bc0c2e7616ec626ba6f86e748e77570064219e2ec04564
SHA512: a3be39ab1f55f06d2e244712b6241ac44f0a9c441c5a80635a42a6eb4a05f355bb7d91c45a800447526f8fade813db9da539dc871ca532b31c77ea96ba2e5eac
Homepage: http://zenoh.io
Description: Zenoh bridge for ROS1
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-ros1
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-ros1
Package: zenoh-bridge-ros1
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9918
Depends: libc6:armel (>= 2.31)
Filename: ./0.10.0-rc/zenoh-bridge-ros1_0.10.0-rc_armel.deb
Size: 3033252
MD5sum: dec263191740bd70d4a11a29eb6ee025
SHA1: e73b98d1e7e31165f9fdb355cf33211b8ece7b0c
SHA256: 4b9961c979e13015cff2eec98fcad662a85a2384cef24aa3c1579eb5ef065ca6
SHA512: 6038be04c879b391dc446bb2aaf8471f157a43563429d15cc1b058717ad563204dbed98395da0e48abfadd41714aea2ec5f9eca1c60df5bf507f204272e64538
Homepage: http://zenoh.io
Description: Zenoh bridge for ROS1
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-ros1
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-ros1
Package: zenoh-bridge-ros1
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 9658
Depends: libc6:armhf (>= 2.31)
Filename: ./0.10.0-rc/zenoh-bridge-ros1_0.10.0-rc_armhf.deb
Size: 3022472
MD5sum: e39734f898011bfe56429a2eebc53530
SHA1: 7a9b3a2495d3a82676604312194df26f1108952f
SHA256: 09bc979e9d93e69d8f14fd4f1e479dc6a38241a80bbd8bdba46f4cd67442a528
SHA512: 3211e0c39cd52950ebcd116c69e5fe092e87b12743c31e631d83f65fa76a212fed88c2c5249ff5c2d85b71555f1d813b6806a265b4998dda36be893019aaaba3
Homepage: http://zenoh.io
Description: Zenoh bridge for ROS1
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-ros1
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-ros1
Package: zenoh-cpp
Architecture: x86_64
Version: 0.10.0.1
Priority: optional
Section: devel
Maintainer: ZettaScale Zenoh Team,
Installed-Size: 175
Filename: ./0.10.0-rc/zenoh-cpp-0.10.0.1.deb
Size: 25072
MD5sum: 7e547a46dff4b8841100db16cda9b510
SHA1: a12e42b96fbf45675ba22f2f6045fa851023725d
SHA256: faacd7427bf7f22e6441044ebb518dc0b058f264cab2c5ee359c65b172fd6640
SHA512: 7f98b28670fedb1ca073af755ec916cbe563fcf7821da4c720df311db33c974366c3f711f35b776c43a336877d27a851d65eac65ac9f2ed6f9d9ef37d83d717e
Homepage: https://github.com/eclipse-zenoh/zenoh-cpp
Description: C++ bindings for Zenoh
Package: zenoh-plugin-dds
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 4493
Depends: zenohd (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-plugin-dds_0.10.0-rc_amd64.deb
Size: 1379700
MD5sum: 7f1426d7e145690366499e862b2df140
SHA1: 4db854f1b1a2f3d03a8caa2ed5abfd295337c622
SHA256: 10dba9e7d027d83a1593faceaeb401220499947ccdb5637bb8598f9aa796cc50
SHA512: 0d6a728f13f2382b683ce4943507329bdf9bda1bec7174be2901f363d49b59d66ca5e09413e834464a6a8708fcd2d27a69fd716df9297c613ee84871a66de501
Homepage: http://zenoh.io
Description: Zenoh plugin for ROS2 and DDS in general
.
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# DDS plugin and standalone `zenoh-bridge-dds`
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Docker image:** see [below](#Docker-image)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
## Background
The Data Distribution Service (DDS) is a standard for data-centric
publish/subscribe. Whilst DDS has been around for quite some time and has a
long history of deployments in various industries, it has recently gained
quite a bit of attention thanks to its adoption by the Robot Operating System
(ROS2) -- where it is used for communication between ROS2 nodes.
.
## Robot Swarms and Edge Robotics
As mentioned above, ROS2 has adopted DDS as the mechanism to exchange data
between nodes within and potentially across a robot. That said, due to some of
the very core assumptions at the foundations of the DDS wire protocol, besides
the fact that it leverages UDP/IP multicast for communication, it is not so
straightforward to scale DDS communication over a WAN or across multiple LANs.
Zenoh, on the other hand, was designed from its inception to operate at
Internet scale.
.

.
Thus, the main motivations to have a **DDS plugin** for **Eclipse zenoh** are:
.
- Facilitate the interconnection of robot swarms.
- Support use cases of edge robotics.
- Give the possibility to use **zenoh**'s geo-distributed storage and query
system to better manage robots' data.
.
Like any plugin for Eclipse zenoh, it can be dynamically loaded by a zenoh
router, at startup or at runtime.
In addition, this project also provides a standalone version of this plugin as
an executable binary named `zenoh-bridge-dds`.
.
## How to install it
.
To install the latest release of either the DDS plugin for the Zenoh router
or the `zenoh-bridge-dds` standalone executable, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-plugin-dds/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download:
- the `zenoh-plugin-dds--.zip` file for the plugin.
Then unzip it in the same directory as `zenohd`, or in any directory where
it can find the plugin library (e.g. /usr/lib)
- the `zenoh-bridge-dds--.zip` file for the standalone
executable.
Then unzip it where you want, and run the extracted `zenoh-bridge-dds`
binary.
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" | sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
```
Then either:
- install the plugin with: `sudo apt install zenoh-plugin-dds`.
- install the standalone executable with: `sudo apt install
zenoh-bridge-dds`.
.
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
plugins should be
built with the exact same Rust version as `zenohd`, and using the same version
(or commit number) of the `zenoh` dependency as `zenohd`.
Otherwise, incompatibilities in the memory mapping of shared types between
`zenohd` and the library can lead to a `SIGSEGV` crash.
.
In order to build the zenoh bridge for DDS, you first need to install the
following dependencies:
.
- [Rust](https://www.rust-lang.org/tools/install). If you already have the Rust
toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
- On Linux, make sure the `llvm` and `clang` development packages are
installed:
- on Debian do: `sudo apt install llvm-dev libclang-dev`
- on CentOS or RHEL do: `sudo yum install llvm-devel clang-devel`
- on Alpine do: `apk add llvm11-dev clang-dev`
- [CMake](https://cmake.org/download/) (to build CycloneDDS which is a native
dependency)
.
Once these dependencies are in place, you may clone the repository on your
machine:
.
```bash
$ git clone https://github.com/eclipse-zenoh/zenoh-plugin-dds.git
$ cd zenoh-plugin-dds
```
> :warning: **WARNING** :warning: : On Linux, don't use the `cargo build`
command without specifying a package with `-p`. Building both
`zenoh-plugin-dds` (plugin library) and `zenoh-bridge-dds` (standalone
executable) together will lead to a ``multiple definition of `load_plugin'``
error at link time. See
[#117](https://github.com/eclipse-zenoh/zenoh-plugin-dds/issues/117#issuecomment-1439694331)
for explanations.
.
You can then choose between building the zenoh bridge for DDS:
- as a plugin library that can be dynamically loaded by the zenoh router
(`zenohd`):
```bash
$ cargo build --release -p zenoh-plugin-dds
```
The plugin shared library (`*.so` on Linux, `*.dylib` on Mac OS, `*.dll` on
Windows) will be generated in the `target/release` subdirectory.
.
- or as a standalone executable binary:
```bash
$ cargo build --release -p zenoh-bridge-dds
```
The **`zenoh-bridge-dds`** binary will be generated in the `target/release`
sub-directory.
.
.
### Enabling Cyclone DDS Shared Memory Support
.
Cyclone DDS Shared memory support is provided by the [Iceoryx
library](https://iceoryx.io/). Iceoryx introduces additional system
requirements which are documented
[here](https://iceoryx.io/v2.0.1/getting-started/installation/#dependencies).
.
To build the zenoh bridge for DDS with support for shared memory the `dds_shm`
optional feature must be enabled during the build process as follows:
- plugin library:
```bash
$ cargo build --release -p zenoh-plugin-dds --features dds_shm
```
.
- standalone executable binary:
```bash
$ cargo build --release -p zenoh-bridge-dds --features dds_shm
```
.
**Note:** Iceoryx does not need to be installed to build the bridge when the
`dds_shm` feature is enabled. Iceoryx will be automatically downloaded,
compiled, and statically linked into the zenoh bridge as part of the cargo
build process.
.
When the zenoh bridge is configured to use DDS shared memory (see
[Configuration](#configuration)) the **Iceoryx RouDi daemon (`iox-roudi`)**
must be running in order for the bridge to start successfully. If not started
the bridge will wait for a period of time for the daemon to become available
before timing out and terminating.
.
When building the zenoh bridge with the `dds_shm` feature enabled the
`iox-roudi` daemon is also built for convenience. The daemon can be found under
`target/debug|release/build/cyclors-/out/iceoryx-build/bin/iox-roudi`.
.
See
[here](https://cyclonedds.io/docs/cyclonedds/latest/shared_memory/shared_memory.html)
for more details of shared memory support in Cyclone DDS.
.
.
### ROS2 package
If you're a ROS2 user, you can also build `zenoh-bridge-dds` as a ROS package
by running:
```bash
rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_dds --cmake-args -DCMAKE_BUILD_TYPE=Release
```
The `rosdep` command will automatically install *Rust* and *clang* as build
dependencies.
.
If you want to cross-compile the package on an x86 device for any target, you
can use the following command:
```bash
rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_dds --cmake-args -DCMAKE_BUILD_TYPE=Release --cmake-args -DCROSS_ARCH=
```
where `` is the target architecture (e.g. `aarch64-unknown-linux-gnu`).
The architecture list can be found
[here](https://doc.rust-lang.org/nightly/rustc/platform-support.html).
.
The cross-compilation uses `zig` as a linker; you can install it following the
instructions [here](https://ziglang.org/download/). Also, the `zigbuild`
package is required to be installed on the target device; you can install it
following the instructions
[here](https://github.com/rust-cross/cargo-zigbuild#installation).
.
## Docker image
The **`zenoh-bridge-dds`** standalone executable is also available as a
[Docker
image](https://hub.docker.com/r/eclipse/zenoh-bridge-dds/tags?page=1&ordering=last_updated)
for both amd64 and arm64. To get it, do:
- `docker pull eclipse/zenoh-bridge-dds:latest` for the latest release
- `docker pull eclipse/zenoh-bridge-dds:master` for the master branch version
(nightly build)
.
:warning: **However, notice that its usage is limited to Docker on Linux,
using the `--net host` option.**
The cause being that DDS uses UDP multicast and Docker doesn't support UDP
multicast between a container and its host (see cases
[moby/moby#23659](https://github.com/moby/moby/issues/23659),
[moby/libnetwork#2397](https://github.com/moby/libnetwork/issues/2397) or
[moby/libnetwork#552](https://github.com/moby/libnetwork/issues/552)). The only
known way to make it work is to use the `--net host` option that is [only
supported on Linux hosts](https://docs.docker.com/network/host/).
.
Usage: **`docker run --init --net host eclipse/zenoh-bridge-dds`**
It supports the same command line arguments as `zenoh-bridge-dds` (see
below or check with the `-h` argument).
.
## For a quick test with ROS2 turtlesim
Prerequisites:
- A [ROS2 environment](http://docs.ros.org/en/galactic/Installation.html)
(any DDS implementation will do, as long as it implements the standard DDSI
protocol - the default [Eclipse
CycloneDDS](https://github.com/eclipse-cyclonedds/cyclonedds) being just fine)
- The [turtlesim
package](http://docs.ros.org/en/galactic/Tutorials/Turtlesim/Introducing-Turtlesim.html#install-turtlesim)
.
### _1 host, 2 ROS domains_
For a quick test on a single host, you can run the `turtlesim_node` and the
`turtle_teleop_key` on distinct ROS domains. As soon as you run 2
`zenoh-bridge-dds` (1 per domain) the `turtle_teleop_key` can drive the
`turtlesim_node`.
Here are the commands to run:
- `ROS_DOMAIN_ID=1 ros2 run turtlesim turtlesim_node`
- `ROS_DOMAIN_ID=2 ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -d 1`
- `./target/release/zenoh-bridge-dds -d 2`
.
Notice that by default the 2 bridges will discover each other using UDP
multicast.
.
### _2 hosts, avoiding UDP multicast communication_
By default, DDS (and thus ROS2) uses UDP multicast for discovery and
publications. But on some networks, UDP multicast is not supported, or badly
supported. In such cases, deploying the `zenoh-bridge-dds` on both hosts makes
it possible to:
- limit the DDS discovery traffic, as detailed in [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
- route all the DDS publications made on UDP multicast by each node through
the zenoh protocol that by default uses TCP.
.
Here are the commands to test this configuration with turtlesim:
- on host 1:
- `ROS_DOMAIN_ID=1 ros2 run turtlesim turtlesim_node`
- `./target/release/zenoh-bridge-dds -d 1 -l tcp/0.0.0.0:7447`
- on host 2:
- `ROS_DOMAIN_ID=2 ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -d 2 -e tcp/<host-1-ip>:7447` - where
`<host-1-ip>` is the IP of host 1
.
Notice that to avoid unwanted direct DDS communication, 2 distinct ROS domains
are still used.
.
### _2 hosts, with an intermediate zenoh router in the cloud_
If your 2 hosts can't communicate point-to-point, you can leverage a [zenoh
router](https://github.com/eclipse-zenoh/zenoh#how-to-build-it) deployed in a
cloud instance (any Linux VM will do the job). You just need to configure your
cloud instance with a public IP and authorize TCP port **7447**.
.
:warning: the zenoh protocol is still under development, which can lead to
incompatibilities between the bridge and the router if their zenoh versions
differ. Please make sure you use a zenoh router built from a recent commit on
its `master` branch.
.
Here are the commands to test this configuration with turtlesim:
- on cloud VM:
- `zenohd`
- on host 1:
- `ros2 run turtlesim turtlesim_node`
- `./target/release/zenoh-bridge-dds -e tcp/<cloud-ip>:7447`
_where `<cloud-ip>` is the IP of your cloud instance_
- on host 2:
- `ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -e tcp/<cloud-ip>:7447`
_where `<cloud-ip>` is the IP of your cloud instance_
.
Notice that there is no need for distinct ROS domains here, since the 2 hosts
are not supposed to communicate with each other directly.
.
## More advanced usage for ROS2
### _Full support of ROS graph and topic lists via the forward discovery mode_
By default the bridge doesn't route the DDS discovery traffic through zenoh to
the remote bridges.
This means that, if you use 2 **`zenoh-bridge-dds`** instances to interconnect
2 DDS domains, the DDS entities discovered in one domain won't be advertised in
the other domain. Thus, DDS data will be routed between the 2 domains only if
matching readers and writers are declared in the 2 domains independently.
.
This default behaviour has an impact on ROS2 behaviour: on one side of the
bridge the ROS graph might not reflect all the nodes from the other side of the
bridge. The `ros2 topic list` command might not list all the topics declared on
the other side. And the **ROS graph** is limited to the nodes in each domain.
.
But using the **`--fwd-discovery`** (or `-f`) option for all bridges makes them
behave differently:
- each bridge will forward via zenoh the local DDS discovery data to the
remote bridges (in a more compact way than the original DDS discovery traffic)
- each bridge receiving DDS discovery data via zenoh will create a replica of
the DDS reader or writer, with similar QoS. Those replicas will serve the route
to/from zenoh, and will be discovered by the ROS2 nodes.
- each bridge will forward the `ros_discovery_info` data (in a less intensive
way than the original publications) to the remote bridges. On reception, the
remote bridges will convert the original entities' GIDs into the GIDs of the
corresponding replicas, and re-publish on DDS the `ros_discovery_info`. The
full ROS graph can then be discovered by the ROS2 nodes on each host.
### _Limiting the ROS2 topics, services, parameters or actions to be routed_
By default 2 zenoh bridges will route all ROS2 topics and services for which
they detect a Writer on one side and a Reader on the other side. But you might
want to prevent some topics and services from being routed by the bridge.
.
When starting `zenoh-bridge-dds`, you can use the `--allow` argument to specify
the subset of topics and services that will be routed by the bridge. This
argument accepts a string which is a regular expression that must match a
substring of an allowed zenoh key (see details of [mapping of ROS2 names to
zenoh keys](#mapping-ros2-names-to-zenoh-keys)).
.
Here are some examples of usage:
| `--allow` value | allowed ROS2 communication |
| :-- | :-- |
| `/rosout` | `/rosout` |
| `/rosout\|/turtle1/cmd_vel\|/turtle1/rotate_absolute` | `/rosout`<br>`/turtle1/cmd_vel`<br>`/turtle1/rotate_absolute` |
| `/rosout\|/turtle1/` | `/rosout` and all `/turtle1` topics, services, parameters and actions |
| `/turtle1/.*` | all topics and services with a name containing `/turtle1/` |
| `/turtle1/` | same: all topics, services, parameters and actions with a name containing `/turtle1/` |
| `rt/turtle1` | all topics with a name containing `/turtle1` (no services, parameters or actions) |
| `rq/turtle1\|/rr/turtle1` | all services and parameters with a name containing `/turtle1` (no topics or actions) |
| `rq/turtlesim/.*parameter\|/rr/turtlesim/.*parameter` | all parameters with a name containing `/turtlesim` (no topics, services or actions) |
| `rq/turtle1/.*/_action\|/rr/turtle1/.*/_action` | all actions with a name containing `/turtle1` (no topics, services or parameters) |
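Since the `--allow` value is an (extended) regular expression matched against a
substring of the zenoh key, you can preview which keys a given value would
match with `grep -E`. A minimal sketch, using key names taken from the
turtlesim examples:
.
```bash
# Candidate zenoh keys, as produced by the ROS2-to-zenoh name mapping.
keys="rt/rosout
rt/turtle1/cmd_vel
rq/turtle1/set_penRequest
rr/turtle1/set_penReply"
# Keys matching an --allow value of '/turtle1/' (substring match, like the bridge):
echo "$keys" | grep -E '/turtle1/'
```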
.
### _Running several robots without changing the ROS2 configuration_
If you run similar robots in the same network, they will by default all use the
same DDS topics, leading to interference in their operations.
A simple way to address this issue using the zenoh bridge is to:
- deploy 1 zenoh bridge per robot
- have each bridge started with the `--scope "/<robot-id>"` argument, each
robot having its own id.
- make sure each robot cannot directly communicate via DDS with another robot
by setting a distinct domain per robot, or configuring its network interface to
not route UDP multicast outside the host.
.
Using the `--scope` option, a prefix is added to each zenoh key
published/subscribed by the bridge (more details in [mapping of ROS2 names to
zenoh keys](#mapping-ros2-names-to-zenoh-keys)). To interact with a robot, a
remote ROS2 application must use a zenoh bridge configured with the same scope
as the robot.
.
### _Closer integration of ROS2 with zenoh_
As you will have understood, when using the zenoh bridge each ROS2 publication
and subscription is mapped to a zenoh key. Therefore, it's relatively easy to
develop an application using one of the [zenoh
APIs](https://zenoh.io/docs/apis/apis/) to interact with one or more robots at
the same time.
.
See how to achieve that in detail in [this
blog](https://zenoh.io/blog/2021-04-28-ros2-integration/).
.
## Configuration
.
`zenoh-bridge-dds` can be configured via a JSON5 file passed via the `-c`
argument. You can see a commented example of such a configuration file:
[`DEFAULT_CONFIG.json5`](DEFAULT_CONFIG.json5).
.
The `"dds"` part of this same configuration file can also be used in the
configuration file for the zenoh router (within its `"plugins"` part). The
router will automatically try to load the plugin library (`zenoh-plugin-dds`)
at startup and apply its configuration.
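For illustration, the `"dds"` part of such a configuration might look as
follows. This is a hedged sketch with example values; the authoritative list of
fields and their exact names is in
[`DEFAULT_CONFIG.json5`](DEFAULT_CONFIG.json5).
.
```json5
{
  plugins: {
    dds: {
      // DDS Domain ID (equivalent to the -d argument)
      domain: 0,
      // prefix added to each zenoh key (equivalent to -s)
      scope: "/demo",
      // regular expression of allowed 'partition/topic-name' (equivalent to -a)
      allow: "cmd_vel|rosout",
    }
  }
}
```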
.
`zenoh-bridge-dds` also accepts the following arguments. If set, each argument
overrides the corresponding setting from the configuration file:
* zenoh-related arguments:
  - **`-c, --config <FILE>`** : a config file
  - **`-m, --mode <MODE>`** : The zenoh session mode. Default: `peer`. Possible values: `peer` or `client`. See the [zenoh documentation](https://zenoh.io/docs/getting-started/key-concepts/#deployment-units) for more details.
  - **`-l, --listen <LOCATOR>`** : A locator on which this bridge will listen for incoming sessions. Repeat this option to open several listeners. Example of locator: `tcp/localhost:7447`.
  - **`-e, --peer <LOCATOR>`** : A peer locator this bridge will try to connect to (typically another bridge or a zenoh router). Repeat this option to connect to several peers. Example of locator: `tcp/<ip-address>:7447`.
  - **`--no-multicast-scouting`** : disable the zenoh scouting protocol that allows automatic discovery of zenoh peers and routers.
  - **`-i, --id <HEX_STRING>`** : The identifier (as a hexadecimal string, e.g. `0A0B23...`) that the zenoh bridge must use. **WARNING: this identifier must be unique in the system!** If not set, a random UUIDv4 will be used.
  - **`--group-member-id <ID>`** : The bridges supervise each other via zenoh liveliness tokens. This option allows setting a custom identifier for the bridge, to be used in the liveliness token key (if not specified, the zenoh UUID is used).
  - **`--rest-http-port <PORT>`** : set the REST API http port (default: 8000)
* DDS-related arguments:
  - **`-d, --domain <ID>`** : The DDS Domain ID. By default set to `0`, or to `$ROS_DOMAIN_ID` if this environment variable is defined.
  - **`--dds-localhost-only`** : If set, the DDS discovery and traffic will occur only on the localhost interface (127.0.0.1). By default set to false, unless the `ROS_LOCALHOST_ONLY=1` environment variable is defined.
  - **`--dds-enable-shm`** : If set, DDS will be configured to use shared memory. Requires the bridge to be built with the `dds_shm` feature for this option to be valid. By default set to false.
  - **`-f, --fwd-discovery`** : When set, rather than creating a local route when discovering a local DDS entity, this discovery info is forwarded to the remote plugins/bridges. Those will create the routes, including a replica of the discovered entity. More details [here](#full-support-of-ros-graph-and-topic-lists-via-the-forward-discovery-mode).
  - **`-s, --scope <String>`** : A string used as a prefix to scope DDS traffic when mapped to zenoh keys.
  - **`-a, --allow <String>`** : A regular expression matching the set of 'partition/topic-name' that must be routed via zenoh. By default, all partitions and topics are allowed. If both 'allow' and 'deny' are set, a partition and/or topic will be allowed only if it matches the 'allow' expression and not the 'deny' expression. Repeat this option to configure several topic expressions; these expressions are concatenated with `|`. Examples of expressions:
    - `.*/TopicA` will allow only `TopicA` to be routed, whatever the partition.
    - `PartitionX/.*` will allow all topics to be routed, but only on `PartitionX`.
    - `cmd_vel|rosout` will allow only the topics containing `cmd_vel` or `rosout` in their name or partition name to be routed.
  - **`--deny <String>`** : A regular expression matching the set of 'partition/topic-name' that must NOT be routed via zenoh. By default, no partitions and no topics are denied. If both 'allow' and 'deny' are set, a partition and/or topic will be allowed only if it matches the 'allow' expression and not the 'deny' expression. Repeat this option to configure several topic expressions; these expressions are concatenated with `|`.
  - **`--max-frequency <String>...`** : specifies a maximum frequency of data routing over zenoh, per topic. The string must have the format `"regex=float"` where:
    - `"regex"` is a regular expression matching the set of 'partition/topic-name' for which the data (per DDS instance) must be routed at no higher rate than the associated max frequency (same syntax as the `--allow` option).
    - `"float"` is the maximum frequency in Hertz; if the publication rate is higher, downsampling will occur when routing.
    (usable multiple times)
  - **`--queries-timeout <Duration>`** : A duration in seconds (default: 5.0 sec) used as a timeout when the bridge queries any other remote bridge for discovery information and for historical data for the TRANSIENT_LOCAL DDS Readers it serves (i.e. if the query to the remote bridge exceeds the timeout, some historical samples might not be routed to the Readers, but the route will not be blocked forever).
  - **`-w, --generalise-pub <String>`** : A list of key expressions to use for generalising the declaration of the zenoh publications, and thus minimizing the discovery traffic (usable multiple times). See [this blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation) for more details.
  - **`-r, --generalise-sub <String>`** : A list of key expressions to use for generalising the declaration of the zenoh subscriptions, and thus minimizing the discovery traffic (usable multiple times). See [this blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation) for more details.
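The combined `--allow`/`--deny` behaviour described above can be sketched as a
small shell check. This is a hypothetical helper for illustration only (it
assumes both expressions are given), not part of the bridge:
.
```bash
# Decide whether a 'partition/topic-name' would be routed, given the
# extended regular expressions passed as --allow and --deny.
is_routed() {
  key="$1"; allow="$2"; deny="$3"
  # denied as soon as the deny expression matches
  printf '%s\n' "$key" | grep -qE "$deny" && return 1
  # otherwise routed only if the allow expression matches
  printf '%s\n' "$key" | grep -qE "$allow"
}
is_routed "PartitionX/TopicA" '.*/TopicA' 'rosout' && echo "routed"
is_routed "PartitionX/rosout" '.*/TopicA' 'rosout' || echo "dropped"
```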
.
## Admin space
.
The zenoh bridge for DDS exposes an administration space that allows browsing
the DDS entities that have been discovered (with their QoS), and the routes
that have been established between DDS and zenoh.
This administration space is accessible via any zenoh API, including the REST
API that you can activate at `zenoh-bridge-dds` startup using the
`--rest-http-port` argument.
.
The `zenoh-bridge-dds` exposes this administration space with paths prefixed by
`@/service/<uuid>/dds` (where `<uuid>` is the unique identifier of the bridge
instance). The information is organized under the following paths:
- `@/service/<uuid>/dds/version` : the bridge version
- `@/service/<uuid>/dds/config` : the bridge configuration
- `@/service/<uuid>/dds/participant/<gid>/reader/<gid>/<topic>` : a discovered
DDS reader on `<topic>`
- `@/service/<uuid>/dds/participant/<gid>/writer/<gid>/<topic>` : a discovered
DDS writer on `<topic>`
- `@/service/<uuid>/dds/route/from_dds/<zenoh-key>` : a route established from
a DDS writer to a zenoh key named `<zenoh-key>` (see [mapping
rules](#mapping-dds-topics-to-zenoh-resources)).
- `@/service/<uuid>/dds/route/to_dds/<zenoh-key>` : a route established from a
zenoh key named `<zenoh-key>` to a DDS reader (see [mapping
rules](#mapping-dds-topics-to-zenoh-resources)).
.
Examples of queries on the administration space using the REST API with the
`curl` command line tool (don't forget to activate the REST API with the
`--rest-http-port 8000` argument):
- List all the DDS entities that have been discovered:
```bash
curl http://localhost:8000/@/service/**/participant/**
```
- List all established routes:
```bash
curl http://localhost:8000/@/service/**/route/**
```
- List all discovered DDS entities and established route for topic `cmd_vel`:
```bash
curl http://localhost:8000/@/service/**/cmd_vel
```
.
> _Pro tip: pipe the result into the [**jq**](https://stedolan.github.io/jq/)
command for JSON pretty printing or transformation._
.
## Architecture details
.
Whether it's built as a library or as a standalone executable, the **zenoh
bridge for DDS** does the same things:
- in default mode:
- it discovers the DDS readers and writers declared by any DDS application,
via the standard DDS discovery protocol (that uses UDP multicast)
- it creates a mirror DDS writer or reader for each discovered reader or
writer (using the same QoS)
- it maps the discovered DDS topics and partitions to zenoh keys (see mapping
details below)
- it forwards user's data from a DDS topic to the corresponding zenoh key,
and vice versa
- it does not forward to the remote bridge any DDS discovery information
.
- in "forward discovery" mode
- it behaves as described
[here](#full-support-of-ros-graph-and-topic-lists-via-the-forward-discovery-mode)
### _Mapping of DDS topics to zenoh keys_
The mapping between DDS and zenoh is rather straightforward: given a DDS
Reader/Writer for topic **`A`** without the partition QoS set, then the
equivalent zenoh key will have the same name: **`A`**.
If a partition QoS **`P`** is defined, the equivalent zenoh key will be named
**`P/A`**.
.
Optionally, the bridge can be configured with a **scope** that will be used as
a prefix to each zenoh key.
That is, for scope **`S`** the equivalent zenoh key will be:
- **`S/A`** for a topic **`A`** without partition
- **`S/P/A`** for a topic **`A`** and a partition **`P`**
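These rules can be sketched as a tiny helper function (hypothetical, for
illustration only):
.
```bash
# Compose the zenoh key for a DDS topic, with optional partition and scope.
zenoh_key() {
  scope="$1"; partition="$2"; topic="$3"
  key="$topic"
  [ -n "$partition" ] && key="$partition/$key"
  [ -n "$scope" ] && key="$scope/$key"
  printf '%s\n' "$key"
}
zenoh_key ""  ""  "A"   # -> A
zenoh_key ""  "P" "A"   # -> P/A
zenoh_key "S" "P" "A"   # -> S/P/A
```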
.
### _Mapping ROS2 names to zenoh keys_
The mapping from ROS2 topic and service names to DDS topics is specified
[here](https://design.ros2.org/articles/topic_and_service_names.html#mapping-of-ros-2-topic-and-service-names-to-dds-concepts).
Notice that ROS2 does not use DDS partitions.
As a consequence of this mapping and of the DDS-to-zenoh mapping specified
above, here are some examples of mappings from ROS2 names to zenoh keys:
.
| ROS2 names | DDS Topics names | zenoh keys (no scope) | zenoh keys (if scope="`myscope`") |
| --- | --- | --- | --- |
| topic: `/rosout` | `rt/rosout` | `rt/rosout` | `myscope/rt/rosout` |
| topic: `/turtle1/cmd_vel` | `rt/turtle1/cmd_vel` | `rt/turtle1/cmd_vel` | `myscope/rt/turtle1/cmd_vel` |
| service: `/turtle1/set_pen` | `rq/turtle1/set_penRequest`<br>`rr/turtle1/set_penReply` | `rq/turtle1/set_penRequest`<br>`rr/turtle1/set_penReply` | `myscope/rq/turtle1/set_penRequest`<br>`myscope/rr/turtle1/set_penReply` |
| action: `/turtle1/rotate_absolute` | `rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`rt/turtle1/rotate_absolute/_action/status`<br>`rt/turtle1/rotate_absolute/_action/feedback` | `rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`rt/turtle1/rotate_absolute/_action/status`<br>`rt/turtle1/rotate_absolute/_action/feedback` | `myscope/rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`myscope/rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`myscope/rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`myscope/rt/turtle1/rotate_absolute/_action/status`<br>`myscope/rt/turtle1/rotate_absolute/_action/feedback` |
| all parameters for node `turtlesim` | `rq/turtlesim/list_parametersRequest`<br>`rr/turtlesim/list_parametersReply`<br>`rq/turtlesim/describe_parametersRequest`<br>`rr/turtlesim/describe_parametersReply`<br>`rq/turtlesim/get_parametersRequest`<br>`rr/turtlesim/get_parametersReply`<br>`rr/turtlesim/get_parameter_typesReply`<br>`rq/turtlesim/get_parameter_typesRequest`<br>`rq/turtlesim/set_parametersRequest`<br>`rr/turtlesim/set_parametersReply`<br>`rq/turtlesim/set_parameters_atomicallyRequest`<br>`rr/turtlesim/set_parameters_atomicallyReply` | `rq/turtlesim/list_parametersRequest`<br>`rr/turtlesim/list_parametersReply`<br>`rq/turtlesim/describe_parametersRequest`<br>`rr/turtlesim/describe_parametersReply`<br>`rq/turtlesim/get_parametersRequest`<br>`rr/turtlesim/get_parametersReply`<br>`rr/turtlesim/get_parameter_typesReply`<br>`rq/turtlesim/get_parameter_typesRequest`<br>`rq/turtlesim/set_parametersRequest`<br>`rr/turtlesim/set_parametersReply`<br>`rq/turtlesim/set_parameters_atomicallyRequest`<br>`rr/turtlesim/set_parameters_atomicallyReply` | `myscope/rq/turtlesim/list_parametersRequest`<br>`myscope/rr/turtlesim/list_parametersReply`<br>`myscope/rq/turtlesim/describe_parametersRequest`<br>`myscope/rr/turtlesim/describe_parametersReply`<br>`myscope/rq/turtlesim/get_parametersRequest`<br>`myscope/rr/turtlesim/get_parametersReply`<br>`myscope/rr/turtlesim/get_parameter_typesReply`<br>`myscope/rq/turtlesim/get_parameter_typesRequest`<br>`myscope/rq/turtlesim/set_parametersRequest`<br>`myscope/rr/turtlesim/set_parametersReply`<br>`myscope/rq/turtlesim/set_parameters_atomicallyRequest`<br>`myscope/rr/turtlesim/set_parameters_atomicallyReply` |
| specific ROS discovery topic | `ros_discovery_info` | `ros_discovery_info` | `myscope/ros_discovery_info` |
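The topic and service rows of the table above can be sketched with two small
helper functions (hypothetical, for illustration; the authoritative rules are
in the ROS2 design document linked above):
.
```bash
# Map a ROS2 topic name to its DDS topic / zenoh key (no scope).
ros2_topic_key() { printf 'rt%s\n' "$1"; }
# Map a ROS2 service name to its request/reply DDS topics / zenoh keys.
ros2_service_keys() { printf 'rq%sRequest\nrr%sReply\n' "$1" "$1"; }
ros2_topic_key /rosout              # -> rt/rosout
ros2_service_keys /turtle1/set_pen  # -> rq/turtle1/set_penRequest
                                    #    rr/turtle1/set_penReply
```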
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Package: zenoh-plugin-dds
Architecture: arm64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 4005
Depends: zenohd (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-plugin-dds_0.10.0-rc_arm64.deb
Size: 1226436
MD5sum: bf73481c8f88b053eaaba2d598272212
SHA1: 75ae7d5fca6392db125eb14f11173c467cd10049
SHA256: efc316c241f0908999350f2a811c84c4ec04afa2fa5a6fa38c07e205b0ae2667
SHA512: f8838d3aafea1a1bb38d15e1dbc6ccf4037f4139a202f5913f1e2c4cbc88442fa3ae62807b2c224610a468d1905236eb013adc35abe2f8c8b7bc0af52396b1ec
Homepage: http://zenoh.io
Description: Zenoh plugin for ROS2 and DDS in general
.
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# DDS plugin and standalone `zenoh-bridge-dds`
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Docker image:** see [below](#Docker-image)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
## Background
The Data Distribution Service (DDS) is a standard for data-centric
publish-subscribe. Whilst DDS has been around for quite some time and has a
long history of deployments in various industries, it has recently gained quite
a bit of attention thanks to its adoption by the Robot Operating System (ROS2)
-- where it is used for communication between ROS2 nodes.
.
## Robot Swarms and Edge Robotics
As mentioned above, ROS2 has adopted DDS as the mechanism to exchange data
between nodes within, and potentially across, a robot. That said, due to some
of the core assumptions at the foundation of the DDS wire protocol, besides the
fact that it leverages UDP/IP multicast for communication, it is not
straightforward to scale DDS communication over a WAN or across multiple LANs.
Zenoh, on the other hand, was designed from its inception to operate at
Internet scale.
.

.
Thus, the main motivations to have a **DDS plugin** for **Eclipse zenoh** are:
.
- Facilitate the interconnection of robot swarms.
- Support use cases of edge robotics.
- Give the possibility to use **zenoh**'s geo-distributed storage and query
system to better manage robots' data.
.
As with any plugin for Eclipse zenoh, it can be dynamically loaded by a zenoh
router, at startup or at runtime.
In addition, this project also provides a standalone version of this plugin as
an executable binary named `zenoh-bridge-dds`.
.
## How to install it
.
To install the latest release of either the DDS plugin for the zenoh router or
the `zenoh-bridge-dds` standalone executable, proceed as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-plugin-dds/latest/
.
Each subdirectory has the name of the Rust target. See the platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download:
- the `zenoh-plugin-dds-<version>-<platform>.zip` file for the plugin.
Then unzip it in the same directory as `zenohd` or to any directory where
it can find the plugin library (e.g. `/usr/lib`)
- the `zenoh-bridge-dds-<version>-<platform>.zip` file for the standalone
executable.
Then unzip it where you want, and run the extracted `zenoh-bridge-dds`
binary.
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
```
Then either:
- install the plugin with: `sudo apt install zenoh-plugin-dds`.
- install the standalone executable with: `sudo apt install
zenoh-bridge-dds`.
.
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
plugins should be built with the exact same Rust version as `zenohd`, and using
the same version (or commit number) of the `zenoh` dependency as `zenohd`.
Otherwise, incompatibilities in the memory mapping of shared types between
`zenohd` and the library can lead to a `SIGSEGV` crash.
.
In order to build the zenoh bridge for DDS you need first to install the
following dependencies:
.
- [Rust](https://www.rust-lang.org/tools/install). If you already have the Rust
toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
- On Linux, make sure the `llvm` and `clang` development packages are
installed:
- on Debian do: `sudo apt install llvm-dev libclang-dev`
- on CentOS or RHEL do: `sudo yum install llvm-devel clang-devel`
- on Alpine do: `apk add llvm11-dev clang-dev`
- [CMake](https://cmake.org/download/) (to build CycloneDDS which is a native
dependency)
.
Once these dependencies are in place, you may clone the repository on your
machine:
.
```bash
$ git clone https://github.com/eclipse-zenoh/zenoh-plugin-dds.git
$ cd zenoh-plugin-dds
```
> :warning: **WARNING** :warning: : On Linux, don't use the `cargo build`
command without specifying a package with `-p`. Building both
`zenoh-plugin-dds` (the plugin library) and `zenoh-bridge-dds` (the standalone
executable) together will lead to a ``multiple definition of `load_plugin'``
error at link time. See
[#117](https://github.com/eclipse-zenoh/zenoh-plugin-dds/issues/117#issuecomment-1439694331)
for explanations.
.
You can then choose between building the zenoh bridge for DDS:
- as a plugin library that can be dynamically loaded by the zenoh router
(`zenohd`):
```bash
$ cargo build --release -p zenoh-plugin-dds
```
The plugin shared library (`*.so` on Linux, `*.dylib` on Mac OS, `*.dll` on
Windows) will be generated in the `target/release` subdirectory.
.
- or as a standalone executable binary:
```bash
$ cargo build --release -p zenoh-bridge-dds
```
The **`zenoh-bridge-dds`** binary will be generated in the `target/release`
sub-directory.
.
.
### Enabling Cyclone DDS Shared Memory Support
.
Cyclone DDS Shared memory support is provided by the [Iceoryx
library](https://iceoryx.io/). Iceoryx introduces additional system
requirements which are documented
[here](https://iceoryx.io/v2.0.1/getting-started/installation/#dependencies).
.
To build the zenoh bridge for DDS with support for shared memory the `dds_shm`
optional feature must be enabled during the build process as follows:
- plugin library:
```bash
$ cargo build --release -p zenoh-plugin-dds --features dds_shm
```
.
- standalone executable binary:
```bash
$ cargo build --release -p zenoh-bridge-dds --features dds_shm
```
.
**Note:** Iceoryx does not need to be installed to build the bridge when the
`dds_shm` feature is enabled. Iceoryx will be automatically downloaded,
compiled, and statically linked into the zenoh bridge as part of the cargo
build process.
.
When the zenoh bridge is configured to use DDS shared memory (see
[Configuration](#configuration)) the **Iceoryx RouDi daemon (`iox-roudi`)**
must be running in order for the bridge to start successfully. If not started
the bridge will wait for a period of time for the daemon to become available
before timing out and terminating.
.
When building the zenoh bridge with the `dds_shm` feature enabled the
`iox-roudi` daemon is also built for convenience. The daemon can be found under
`target/debug|release/build/cyclors-/out/iceoryx-build/bin/iox-roudi`.
.
See
[here](https://cyclonedds.io/docs/cyclonedds/latest/shared_memory/shared_memory.html)
for more details of shared memory support in Cyclone DDS.
.
.
### ROS2 package
If you're a ROS2 user, you can also build `zenoh-bridge-dds` as a ROS package
by running:
```bash
rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_dds --cmake-args
-DCMAKE_BUILD_TYPE=Release
```
The `rosdep` command will automatically install *Rust* and *clang* as build
dependencies.
.
If you want to cross-compile the package on an x86 device for any target, you
can use the following command:
```bash
rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_dds --cmake-args
-DCMAKE_BUILD_TYPE=Release --cmake-args -DCROSS_ARCH=<arch>
```
where `<arch>` is the target architecture (e.g. `aarch64-unknown-linux-gnu`).
The architecture list can be found
[here](https://doc.rust-lang.org/nightly/rustc/platform-support.html).
.
The cross-compilation uses `zig` as the linker. You can install it following
the instructions [here](https://ziglang.org/download/). The `zigbuild` package
is also required to be installed on the target device; you can install it
following the instructions
[here](https://github.com/rust-cross/cargo-zigbuild#installation).
.
## Docker image
The **`zenoh-bridge-dds`** standalone executable is also available as a [Docker
image](https://hub.docker.com/r/eclipse/zenoh-bridge-dds/tags?page=1&ordering=last_updated)
for both amd64 and arm64. To get it, do:
- `docker pull eclipse/zenoh-bridge-dds:latest` for the latest release
- `docker pull eclipse/zenoh-bridge-dds:master` for the master branch version
(nightly build)
.
:warning: **However, notice that its usage is limited to Docker on Linux,
using the `--net host` option.**
The cause is that DDS uses UDP multicast and Docker doesn't support UDP
multicast between a container and its host (see cases
[moby/moby#23659](https://github.com/moby/moby/issues/23659),
[moby/libnetwork#2397](https://github.com/moby/libnetwork/issues/2397) or
[moby/libnetwork#552](https://github.com/moby/libnetwork/issues/552)). The only
known way to make it work is to use the `--net host` option that is [only
supported on Linux hosts](https://docs.docker.com/network/host/).
.
Usage: **`docker run --init --net host eclipse/zenoh-bridge-dds`**
It supports the same command line arguments as `zenoh-bridge-dds` itself (see
below, or check with the `-h` argument).
.
## For a quick test with ROS2 turtlesim
Prerequisites:
- A [ROS2 environment](http://docs.ros.org/en/galactic/Installation.html) (any
DDS implementation will do, as long as it implements the standard DDSI
protocol - the default [Eclipse
CycloneDDS](https://github.com/eclipse-cyclonedds/cyclonedds) works just fine)
- The [turtlesim
package](http://docs.ros.org/en/galactic/Tutorials/Turtlesim/Introducing-Turtlesim.html#install-turtlesim)
.
### _1 host, 2 ROS domains_
For a quick test on a single host, you can run the `turtlesim_node` and the
`turtle_teleop_key` on distinct ROS domains. As soon as you run 2
`zenoh-bridge-dds` instances (1 per domain), the `turtle_teleop_key` can drive
the `turtlesim_node`.
Here are the commands to run:
- `ROS_DOMAIN_ID=1 ros2 run turtlesim turtlesim_node`
- `ROS_DOMAIN_ID=2 ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -d 1`
- `./target/release/zenoh-bridge-dds -d 2`
.
Notice that by default the 2 bridges will discover each other using UDP
multicast.
.
### _2 hosts, avoiding UDP multicast communication_
By default DDS (and thus ROS2) uses UDP multicast for discovery and
publications. But on some networks, UDP multicast is not or badly supported.
In such cases, deploying the `zenoh-bridge-dds` on both hosts will:
- limit the DDS discovery traffic, as detailed in [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
- route all the DDS publications made on UDP multicast by each node through
the zenoh protocol that by default uses TCP.
.
Here are the commands to test this configuration with turtlesim:
- on host 1:
- `ROS_DOMAIN_ID=1 ros2 run turtlesim turtlesim_node`
- `./target/release/zenoh-bridge-dds -d 1 -l tcp/0.0.0.0:7447`
- on host 2:
- `ROS_DOMAIN_ID=2 ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -d 2 -e tcp/<host-1-ip>:7447` - where
`<host-1-ip>` is the IP of host 1
.
Notice that to avoid unwanted direct DDS communication, 2 distinct ROS domains
are still used.
.
### _2 hosts, with an intermediate zenoh router in the cloud_
In case your 2 hosts can't communicate point-to-point, you can leverage a
[zenoh router](https://github.com/eclipse-zenoh/zenoh#how-to-build-it)
deployed in a cloud instance (any Linux VM will do the job). You just need to
configure your cloud instance with a public IP and authorize the TCP port
**7447**.
.
:warning: the zenoh protocol is still under development, which can lead to
incompatibilities between the bridge and the router if their zenoh versions
differ. Please make sure you use a zenoh router built from a recent commit on
its `master` branch.
.
Here are the commands to test this configuration with turtlesim:
- on cloud VM:
- `zenohd`
- on host 1:
- `ros2 run turtlesim turtlesim_node`
- `./target/release/zenoh-bridge-dds -e tcp/<cloud-ip>:7447`
_where `<cloud-ip>` is the IP of your cloud instance_
- on host 2:
- `ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -e tcp/<cloud-ip>:7447`
_where `<cloud-ip>` is the IP of your cloud instance_
.
Notice that there is no need to use distinct ROS domains here, since the 2
hosts are not supposed to communicate directly with each other.
.
## More advanced usage for ROS2
### _Full support of ROS graph and topic lists via the forward discovery mode_
By default the bridge doesn't route the DDS discovery traffic through zenoh to
the remote bridges.
This means that, if you use 2 **`zenoh-bridge-dds`** to interconnect 2 DDS
domains, the DDS entities discovered in one domain won't be advertised in the
other domain. Thus, the DDS data will be routed between the 2 domains only if
matching readers and writers are declared in the 2 domains independently.
.
This default behaviour has an impact on ROS2: on one side of the bridge the
ROS graph might not reflect all the nodes from the other side of the bridge.
The `ros2 topic list` command might not list all the topics declared on the
other side. And the **ROS graph** is limited to the nodes in each domain.
.
But using the **`--fwd-discovery`** (or `-f`) option for all bridges makes
them behave differently:
- each bridge will forward via zenoh the local DDS discovery data to the
remote bridges (in a more compact way than the original DDS discovery traffic)
- each bridge receiving DDS discovery data via zenoh will create a replica of
the DDS reader or writer, with similar QoS. Those replicas will serve the route
to/from zenoh, and will be discovered by the ROS2 nodes.
- each bridge will forward the `ros_discovery_info` data (in a less intensive
way than the original publications) to the remote bridges. On reception, the
remote bridges will convert the original entities' GIDs into the GIDs of the
corresponding replicas, and re-publish on DDS the `ros_discovery_info`. The
full ROS graph can then be discovered by the ROS2 nodes on each host.
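.
For example, the two-host turtlesim setup above becomes the following (a
sketch; `<ip-of-host-1>` is a placeholder for the actual IP of host 1):
.
```bash
# on host 1: bridge for domain 1, in forward discovery mode, listening on TCP
./target/release/zenoh-bridge-dds -d 1 -f -l tcp/0.0.0.0:7447
# on host 2: bridge for domain 2, in forward discovery mode, connecting to host 1
./target/release/zenoh-bridge-dds -d 2 -f -e tcp/<ip-of-host-1>:7447
```
.
With `-f` on both sides, the full ROS graph becomes visible on each host (e.g.
via `ros2 node list` and `ros2 topic list`).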
### _Limiting the ROS2 topics, services, parameters or actions to be routed_
By default 2 zenoh bridges will route all ROS2 topics and services for which
they detect a Writer on one side and a Reader on the other side. But you might
want to prevent some topics and services from being routed by the bridge.
.
When starting `zenoh-bridge-dds` you can use the `--allow` argument to specify
the subset of topics and services that will be routed by the bridge. This
argument accepts a string which is a regular expression that must match a
substring of an allowed zenoh key (see details of [mapping of ROS2 names to
zenoh keys](#mapping-ros2-names-to-zenoh-keys)).
.
Here are some examples of usage:
| `--allow` value | allowed ROS2 communication |
| :-- | :-- |
| `/rosout` | `/rosout` |
| `/rosout\|/turtle1/cmd_vel\|/turtle1/rotate_absolute` | `/rosout`<br>`/turtle1/cmd_vel`<br>`/turtle1/rotate_absolute` |
| `/rosout\|/turtle1/` | `/rosout` and all `/turtle1` topics, services, parameters and actions |
| `/turtle1/.*` | all topics and services with a name containing `/turtle1/` |
| `/turtle1/` | same: all topics, services, parameters and actions with a name containing `/turtle1/` |
| `rt/turtle1` | all topics with a name containing `/turtle1` (no services, parameters or actions) |
| `rq/turtle1\|/rr/turtle1` | all services and parameters with a name containing `/turtle1` (no topics or actions) |
| `rq/turtlesim/.*parameter\|/rr/turtlesim/.*parameter` | all parameters with a name containing `/turtlesim` (no topics, services or actions) |
| `rq/turtle1/.*/_action\|/rr/turtle1/.*/_action` | all actions with a name containing `/turtle1` (no topics, services or parameters) |
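.
Since `--allow` only has to match a substring of the zenoh key, you can check
a candidate expression locally with `grep -E`, which applies the same
substring semantics (a sketch, not the bridge's actual code):
.
```bash
# Emulate the bridge's --allow filter: a key is routed when the regular
# expression matches a substring of the zenoh key.
allow='/rosout|/turtle1/'

is_routed() {
    printf '%s\n' "$1" | grep -qE "$allow"
}

for key in rt/rosout rt/turtle1/cmd_vel rq/turtle1/set_penRequest rt/turtle2/cmd_vel; do
    if is_routed "$key"; then
        echo "$key: routed"
    else
        echo "$key: not routed"
    fi
done
```
.
Here `rt/turtle2/cmd_vel` is the only key that is not routed, since it matches
neither `/rosout` nor `/turtle1/`.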
.
### _Running several robots without changing the ROS2 configuration_
If you run similar robots in the same network, they will by default all use
the same DDS topics, leading to interferences in their operations.
A simple way to address this issue using the zenoh bridge is to:
- deploy 1 zenoh bridge per robot
- start each bridge with the `--scope "/<robot-id>"` argument, each robot
having its own id
- make sure each robot cannot directly communicate via DDS with another robot,
by setting a distinct domain per robot or by configuring its network interface
to not route UDP multicast outside the host.
.
Using the `--scope` option, a prefix is added to each zenoh key
published/subscribed by the bridge (more details in [mapping of ROS2 names to
zenoh keys](#mapping-ros2-names-to-zenoh-keys)). To interact with a robot, a
remote ROS2 application must use a zenoh bridge configured with the same scope
as the robot.
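.
For instance, with two robots (hypothetical ids `bot1` and `bot2`), each on
its own ROS domain:
.
```bash
# on robot "bot1"
zenoh-bridge-dds -d 1 --scope "/bot1"
# on robot "bot2"
zenoh-bridge-dds -d 2 --scope "/bot2"
```
.
A remote ROS2 application must then use a bridge started with the matching
`--scope` value to interact with the corresponding robot.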
.
### _Closer integration of ROS2 with zenoh_
As you have understood, with the zenoh bridge each ROS2 publication and
subscription is mapped to a zenoh key. Therefore, it's relatively easy to
develop an application using one of the [zenoh
APIs](https://zenoh.io/docs/apis/apis/) to interact with one or more robots at
the same time.
.
See how to achieve that in detail in [this
blog](https://zenoh.io/blog/2021-04-28-ros2-integration/).
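.
As a sketch of such an integration (this assumes the zenoh Python API with its
`zenoh.open` / `declare_subscriber` entry points; check the API documentation
for your zenoh version, and note that payloads are raw DDS-serialized CDR
bytes):
.
```python
import zenoh

# Open a zenoh session (peer mode by default) and subscribe to the
# zenoh key that the bridge maps ROS2's /rosout topic to.
session = zenoh.open(zenoh.Config())

def on_sample(sample):
    # sample.payload carries the CDR-serialized ROS2 message
    print(f"{sample.key_expr}: {len(bytes(sample.payload))} bytes")

subscriber = session.declare_subscriber("rt/rosout", on_sample)
```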
.
## Configuration
.
`zenoh-bridge-dds` can be configured via a JSON5 file passed via the `-c`
argument. You can see a commented example of such a configuration file:
[`DEFAULT_CONFIG.json5`](DEFAULT_CONFIG.json5).
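.
As an illustration, a minimal configuration might look like this (the field
names are assumed to mirror the command line arguments; refer to
[`DEFAULT_CONFIG.json5`](DEFAULT_CONFIG.json5) for the authoritative list):
.
```json5
{
  plugins: {
    dds: {
      // DDS Domain ID (same as -d/--domain)
      domain: 0,
      // prefix added to each routed zenoh key (same as -s/--scope)
      scope: "/demo",
      // regex of 'partition/topic-name' to route (same as -a/--allow)
      allow: "/rosout|/turtle1/",
    },
  },
  mode: "peer",
}
```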
.
The `"dds"` part of this same configuration file can also be used in the
configuration file for the zenoh router (within its `"plugins"` part). The
router will automatically try to load the plugin library (`zenoh-plugin-dds`)
at startup and apply its configuration.
.
`zenoh-bridge-dds` also accepts the following arguments. If set, each argument
will override the corresponding setting from the configuration file:
* zenoh-related arguments:
- **`-c, --config <FILE>`** : a config file
- **`-m, --mode <MODE>`** : The zenoh session mode. Default: `peer`.
Possible values: `peer` or `client`.
See the [zenoh
documentation](https://zenoh.io/docs/getting-started/key-concepts/#deployment-units)
for more details.
- **`-l, --listen <LOCATOR>`** : A locator on which this router will listen
for incoming sessions. Repeat this option to open several listeners. Example
of locator: `tcp/localhost:7447`.
- **`-e, --peer <LOCATOR>`** : A peer locator this router will try to
connect to (typically another bridge or a zenoh router). Repeat this option to
connect to several peers. Example of locator: `tcp/<ip-address>:7447`.
- **`--no-multicast-scouting`** : disable the zenoh scouting protocol that
allows automatic discovery of zenoh peers and routers.
- **`-i, --id <hex_string>`** : The identifier (as a hexadecimal string -
e.g.: 0A0B23...) that the zenoh bridge must use. **WARNING: this identifier
must be unique in the system!** If not set, a random UUIDv4 will be used.
- **`--group-member-id <ID>`** : The bridges supervise each other via zenoh
liveliness tokens. This option allows setting a custom identifier for the
bridge, that will be used in the liveliness token key (if not specified, the
zenoh UUID is used).
- **`--rest-http-port <PORT>`** : set the REST API http port
(default: 8000)
* DDS-related arguments:
- **`-d, --domain <ID>`** : The DDS Domain ID. By default set to `0`, or to
`$ROS_DOMAIN_ID` if this environment variable is defined.
- **`--dds-localhost-only`** : If set, the DDS discovery and traffic will
occur only on the localhost interface (127.0.0.1).
By default set to false, unless the "ROS_LOCALHOST_ONLY=1" environment
variable is defined.
- **`--dds-enable-shm`** : If set, DDS will be configured to use shared
memory. The bridge must be built with the `dds_shm` feature for this option to
be valid.
By default set to false.
- **`-f, --fwd-discovery`** : When set, rather than creating a local route
when discovering a local DDS entity, this discovery info is forwarded to the
remote plugins/bridges. Those will create the routes, including a replica of
the discovered entity. More details
[here](#full-support-of-ros-graph-and-topic-lists-via-the-forward-discovery-mode)
- **`-s, --scope <String>`** : A string used as a prefix to scope DDS traffic
when mapped to zenoh keys.
- **`-a, --allow <String>`** : A regular expression matching the set of
'partition/topic-name' that must be routed via zenoh.
By default, all partitions and topics are allowed.
If both 'allow' and 'deny' are set, a partition and/or topic will be
allowed only if it matches the 'allow' expression.
Repeat this option to configure several topic expressions. These
expressions are concatenated with '|'.
Examples of expressions:
- `.*/TopicA` will allow only the `TopicA` to be routed, whatever the
partition.
- `PartitionX/.*` will allow all the topics to be routed, but only on
`PartitionX`.
- `cmd_vel|rosout` will allow only the topics containing `cmd_vel` or
`rosout` in their name or partition name to be routed.
- **`--deny <String>`** : A regular expression matching the set of
'partition/topic-name' that must NOT be routed via zenoh.
By default, no partitions and no topics are denied.
If both 'allow' and 'deny' are set, a partition and/or topic will be
allowed only if it matches the 'allow' expression.
Repeat this option to configure several topic expressions. These
expressions are concatenated with '|'.
- **`--max-frequency <String>...`** : specifies a maximum frequency of data
routing over zenoh per-topic. The string must have the format `"regex=float"`
where:
- `"regex"` is a regular expression matching the set of
'partition/topic-name' for which the data (per DDS instance) must be routed at
no higher rate than the associated max frequency (same syntax as the --allow
option).
- `"float"` is the maximum frequency in Hertz; if the publication rate is
higher, downsampling will occur when routing.
.
(usable multiple times)
- **`--queries-timeout <Duration>`**: A duration in seconds (default: 5.0
sec) used as a timeout when the bridge
queries any other remote bridge for discovery information and for
historical data for the TRANSIENT_LOCAL DDS Readers it serves
(i.e. if the query to a remote bridge exceeds the timeout, some
historical samples might not be routed to the Readers,
but the route will not be blocked forever).
- **`-w, --generalise-pub <String>`** : A list of key expressions to use
for generalising the declaration of
the zenoh publications, and thus minimizing the discovery traffic (usable
multiple times).
See [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
for more details.
- **`-r, --generalise-sub <String>`** : A list of key expressions to use
for generalising the declaration of
the zenoh subscriptions, and thus minimizing the discovery traffic (usable
multiple times).
See [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
for more details.
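.
Putting a few of these arguments together, a typical invocation might look
like this (a sketch; adjust the values to your deployment):
.
```bash
# route only /rosout and /turtle1/* keys, downsample /turtle1/pose to 5 Hz,
# and enable the REST API on port 8000
zenoh-bridge-dds -d 1 -m peer -l tcp/0.0.0.0:7447 \
  --allow "/rosout|/turtle1/" \
  --max-frequency "rt/turtle1/pose=5.0" \
  --rest-http-port 8000
```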
.
## Admin space
.
The zenoh bridge for DDS exposes an administration space that allows browsing
the DDS entities that have been discovered (with their QoS), and the routes
that have been established between DDS and zenoh.
This administration space is accessible via any zenoh API, including the REST
API that you can activate at `zenoh-bridge-dds` startup using the
`--rest-http-port` argument.
.
The `zenoh-bridge-dds` exposes this administration space with paths prefixed
by `@/service/<uuid>/dds` (where `<uuid>` is the unique identifier of the
bridge instance). The information is organized with the following paths:
- `@/service/<uuid>/dds/version` : the bridge version
- `@/service/<uuid>/dds/config` : the bridge configuration
- `@/service/<uuid>/dds/participant/<gid>/reader/<gid>/<topic>` : a discovered
DDS reader on `<topic>`
- `@/service/<uuid>/dds/participant/<gid>/writer/<gid>/<topic>` : a discovered
DDS writer on `<topic>`
- `@/service/<uuid>/dds/route/from_dds/<zenoh-key>` : a route established
from a DDS writer to a zenoh key named `<zenoh-key>` (see [mapping
rules](#mapping-dds-topics-to-zenoh-resources)).
- `@/service/<uuid>/dds/route/to_dds/<zenoh-key>` : a route established
from a zenoh key named `<zenoh-key>` to a DDS reader (see [mapping
rules](#mapping-dds-topics-to-zenoh-resources)).
.
Example of queries on the administration space using the REST API with the
`curl` command line tool (don't forget to activate the REST API with the
`--rest-http-port 8000` argument):
- List all the DDS entities that have been discovered:
```bash
curl http://localhost:8000/@/service/**/participant/**
```
- List all established routes:
```bash
curl http://localhost:8000/@/service/**/route/**
```
- List all discovered DDS entities and established routes for topic `cmd_vel`:
```bash
curl http://localhost:8000/@/service/**/cmd_vel
```
.
> _Pro tip: pipe the result into the [**jq**](https://stedolan.github.io/jq/)
command for JSON pretty printing or transformation._
.
## Architecture details
.
Whether it's built as a library or as a standalone executable, the **zenoh
bridge for DDS** does the same things:
- in default mode:
- it discovers the DDS readers and writers declared by any DDS application,
via the standard DDS discovery protocol (that uses UDP multicast)
- it creates a mirror DDS writer or reader for each discovered reader or
writer (using the same QoS)
- it maps the discovered DDS topics and partitions to zenoh keys (see mapping
details below)
- it forwards user's data from a DDS topic to the corresponding zenoh key,
and vice versa
- it does not forward to the remote bridge any DDS discovery information
.
- in "forward discovery" mode:
- it behaves as described
[here](#full-support-of-ros-graph-and-topic-lists-via-the-forward-discovery-mode)
### _Mapping of DDS topics to zenoh keys_
The mapping between DDS and zenoh is rather straightforward: given a DDS
Reader/Writer for topic **`A`** without the partition QoS set, then the
equivalent zenoh key will have the same name: **`A`**.
If a partition QoS **`P`** is defined, the equivalent zenoh key will be named
as **`P/A`**.
.
Optionally, the bridge can be configured with a **scope** that will be used as
a prefix to each zenoh key.
That is, for scope **`S`** the equivalent zenoh key will be:
- **`S/A`** for a topic **`A`** without partition
- **`S/P/A`** for a topic **`A`** and a partition **`P`**
.
### _Mapping ROS2 names to zenoh keys_
The mapping from ROS2 topic and service names to DDS topics is specified
[here](https://design.ros2.org/articles/topic_and_service_names.html#mapping-of-ros-2-topic-and-service-names-to-dds-concepts).
Notice that ROS2 does not use DDS partitions.
As a consequence of this mapping and of the DDS to zenoh mapping specified
above, here are some examples of mapping from ROS2 names to zenoh keys:
.
| ROS2 names | DDS Topics names | zenoh keys (no scope) | zenoh keys (if scope="`myscope`") |
| --- | --- | --- | --- |
| topic: `/rosout` | `rt/rosout` | `rt/rosout` | `myscope/rt/rosout` |
| topic: `/turtle1/cmd_vel` | `rt/turtle1/cmd_vel` | `rt/turtle1/cmd_vel` | `myscope/rt/turtle1/cmd_vel` |
| service: `/turtle1/set_pen` | `rq/turtle1/set_penRequest`<br>`rr/turtle1/set_penReply` | `rq/turtle1/set_penRequest`<br>`rr/turtle1/set_penReply` | `myscope/rq/turtle1/set_penRequest`<br>`myscope/rr/turtle1/set_penReply` |
| action: `/turtle1/rotate_absolute` | `rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`rt/turtle1/rotate_absolute/_action/status`<br>`rt/turtle1/rotate_absolute/_action/feedback` | `rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`rt/turtle1/rotate_absolute/_action/status`<br>`rt/turtle1/rotate_absolute/_action/feedback` | `myscope/rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`myscope/rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`myscope/rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`myscope/rt/turtle1/rotate_absolute/_action/status`<br>`myscope/rt/turtle1/rotate_absolute/_action/feedback` |
| all parameters for node `turtlesim` | `rq/turtlesim/list_parametersRequest`<br>`rr/turtlesim/list_parametersReply`<br>`rq/turtlesim/describe_parametersRequest`<br>`rr/turtlesim/describe_parametersReply`<br>`rq/turtlesim/get_parametersRequest`<br>`rr/turtlesim/get_parametersReply`<br>`rq/turtlesim/get_parameter_typesRequest`<br>`rr/turtlesim/get_parameter_typesReply`<br>`rq/turtlesim/set_parametersRequest`<br>`rr/turtlesim/set_parametersReply`<br>`rq/turtlesim/set_parameters_atomicallyRequest`<br>`rr/turtlesim/set_parameters_atomicallyReply` | `rq/turtlesim/list_parametersRequest`<br>`rr/turtlesim/list_parametersReply`<br>`rq/turtlesim/describe_parametersRequest`<br>`rr/turtlesim/describe_parametersReply`<br>`rq/turtlesim/get_parametersRequest`<br>`rr/turtlesim/get_parametersReply`<br>`rq/turtlesim/get_parameter_typesRequest`<br>`rr/turtlesim/get_parameter_typesReply`<br>`rq/turtlesim/set_parametersRequest`<br>`rr/turtlesim/set_parametersReply`<br>`rq/turtlesim/set_parameters_atomicallyRequest`<br>`rr/turtlesim/set_parameters_atomicallyReply` | `myscope/rq/turtlesim/list_parametersRequest`<br>`myscope/rr/turtlesim/list_parametersReply`<br>`myscope/rq/turtlesim/describe_parametersRequest`<br>`myscope/rr/turtlesim/describe_parametersReply`<br>`myscope/rq/turtlesim/get_parametersRequest`<br>`myscope/rr/turtlesim/get_parametersReply`<br>`myscope/rq/turtlesim/get_parameter_typesRequest`<br>`myscope/rr/turtlesim/get_parameter_typesReply`<br>`myscope/rq/turtlesim/set_parametersRequest`<br>`myscope/rr/turtlesim/set_parametersReply`<br>`myscope/rq/turtlesim/set_parameters_atomicallyRequest`<br>`myscope/rr/turtlesim/set_parameters_atomicallyReply` |
| specific ROS discovery topic | `ros_discovery_info` | `ros_discovery_info` | `myscope/ros_discovery_info` |
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Package: zenoh-plugin-dds
Architecture: armel
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 3888
Depends: zenohd (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-plugin-dds_0.10.0-rc_armel.deb
Size: 1208656
MD5sum: 49e5d71d368745b6791548e65e151d66
SHA1: 1f5122443e032578fecd13864856ff8fb92e16eb
SHA256: 6e1f714b2ecf9bfc1b333b3c074e19a76fa1b96d60f96ba6690a0ce97d3dc0e1
SHA512: c58557c8d72760fe1581c34d02de966e925f859984b91887c82c13d2881e892eee2975f25186c99e80cf2d35025a004098938cf59a148399a05e715a8bbe9f81
Homepage: http://zenoh.io
Description: Zenoh plugin for ROS2 and DDS in general
.
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# DDS plugin and standalone `zenoh-bridge-dds`
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Docker image:** see [below](#Docker-image)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
## Background
The Data Distribution Service (DDS) is a standard for data-centric publish
subscribe. Whilst DDS has been around for quite some time and has a long
history of deployments in various industries, it has recently gained quite a
bit of attention thanks to its adoption by the Robot Operating System (ROS2)
-- where it is used for communication between ROS2 nodes.
.
## Robot Swarms and Edge Robotics
As mentioned above, ROS2 has adopted DDS as the mechanism to exchange data
between nodes within and potentially across a robot. That said, due to some of
the core assumptions underlying the DDS wire protocol, besides the fact that
it leverages UDP/IP multicast for communication, it is not straightforward to
scale DDS communication over a WAN or across multiple LANs. Zenoh, on the
other hand, was designed from its inception to operate at Internet scale.
.

.
Thus, the main motivations to have a **DDS plugin** for **Eclipse zenoh** are:
.
- Facilitate the interconnection of robot swarms.
- Support use cases of edge robotics.
- Give the possibility to use **zenoh**'s geo-distributed storage and query
system to better manage robots' data.
.
Like any plugin for Eclipse zenoh, it can be dynamically loaded by a zenoh
router, at startup or at runtime.
In addition, this project also provides a standalone version of this plugin as
an executable binary named `zenoh-bridge-dds`.
.
## How to install it
.
To install the latest release of either the DDS plugin for the Zenoh router or
the `zenoh-bridge-dds` standalone executable, you can do as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-plugin-dds/latest/
.
Each subdirectory has the name of a Rust target. See which platforms each
target corresponds to on
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download:
- the `zenoh-plugin-dds-<version>-<target>.zip` file for the plugin.
Then unzip it in the same directory as `zenohd`, or to any directory where
it can find the plugin library (e.g. /usr/lib)
- the `zenoh-bridge-dds-<version>-<target>.zip` file for the standalone
executable.
Then unzip it where you want, and run the extracted `zenoh-bridge-dds`
binary.
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
```
Then either:
- install the plugin with: `sudo apt install zenoh-plugin-dds`.
- install the standalone executable with: `sudo apt install
zenoh-bridge-dds`.
.
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
plugins should be
built with the exact same Rust version as `zenohd`, and using the same version
(or commit number) of the `zenoh` dependency as `zenohd`.
Otherwise, incompatibilities in the memory layout of shared types between
`zenohd` and the library can lead to a `SIGSEGV` crash.
.
In order to build the zenoh bridge for DDS you need first to install the
following dependencies:
.
- [Rust](https://www.rust-lang.org/tools/install). If you already have the Rust
toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
- On Linux, make sure the `llvm` and `clang` development packages are
installed:
- on Debian do: `sudo apt install llvm-dev libclang-dev`
- on CentOS or RHEL do: `sudo yum install llvm-devel clang-devel`
- on Alpine do: `apk add llvm11-dev clang-dev`
- [CMake](https://cmake.org/download/) (to build CycloneDDS which is a native
dependency)
.
Once these dependencies are in place, you may clone the repository on your
machine:
.
```bash
$ git clone https://github.com/eclipse-zenoh/zenoh-plugin-dds.git
$ cd zenoh-plugin-dds
```
> :warning: **WARNING** :warning: : On Linux, don't use the `cargo build`
command without specifying a package with `-p`. Building both
`zenoh-plugin-dds` (the plugin library) and `zenoh-bridge-dds` (the standalone
executable) together will lead to a ``multiple definition of `load_plugin'``
error at link time. See
[#117](https://github.com/eclipse-zenoh/zenoh-plugin-dds/issues/117#issuecomment-1439694331)
for explanations.
.
You can then choose between building the zenoh bridge for DDS:
- as a plugin library that can be dynamically loaded by the zenoh router
(`zenohd`):
```bash
$ cargo build --release -p zenoh-plugin-dds
```
The plugin shared library (`*.so` on Linux, `*.dylib` on macOS, `*.dll` on
Windows) will be generated in the `target/release` subdirectory.
.
- or as a standalone executable binary:
```bash
$ cargo build --release -p zenoh-bridge-dds
```
The **`zenoh-bridge-dds`** binary will be generated in the `target/release`
sub-directory.
.
.
### Enabling Cyclone DDS Shared Memory Support
.
Cyclone DDS Shared memory support is provided by the [Iceoryx
library](https://iceoryx.io/). Iceoryx introduces additional system
requirements which are documented
[here](https://iceoryx.io/v2.0.1/getting-started/installation/#dependencies).
.
To build the zenoh bridge for DDS with support for shared memory the `dds_shm`
optional feature must be enabled during the build process as follows:
- plugin library:
```bash
$ cargo build --release -p zenoh-plugin-dds --features dds_shm
```
.
- standalone executable binary:
```bash
$ cargo build --release -p zenoh-bridge-dds --features dds_shm
```
.
**Note:** Iceoryx does not need to be installed to build the bridge when the
`dds_shm` feature is enabled. Iceoryx will be automatically downloaded,
compiled, and statically linked into the zenoh bridge as part of the cargo
build process.
.
When the zenoh bridge is configured to use DDS shared memory (see
[Configuration](#configuration)), the **Iceoryx RouDi daemon (`iox-roudi`)**
must be running in order for the bridge to start successfully. If it is not
started, the bridge will wait for a period of time for the daemon to become
available before timing out and terminating.
.
When building the zenoh bridge with the `dds_shm` feature enabled, the
`iox-roudi` daemon is also built for convenience. The daemon can be found
under `target/<debug|release>/build/cyclors-<hash>/out/iceoryx-build/bin/iox-roudi`.
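.
A possible launch sequence (a sketch; the exact path depends on your build
profile and the `cyclors` build hash):
.
```bash
# start the RouDi daemon built alongside the bridge (path is illustrative)
./target/release/build/cyclors-<hash>/out/iceoryx-build/bin/iox-roudi &
# then start the bridge with shared memory enabled
./target/release/zenoh-bridge-dds --dds-enable-shm
```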
.
See
[here](https://cyclonedds.io/docs/cyclonedds/latest/shared_memory/shared_memory.html)
for more details of shared memory support in Cyclone DDS.
.
.
### ROS2 package
If you're a ROS2 user, you can also build `zenoh-bridge-dds` as a ROS package
by running:
```bash
rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_dds --cmake-args
-DCMAKE_BUILD_TYPE=Release
```
The `rosdep` command will automatically install *Rust* and *clang* as build
dependencies.
.
If you want to cross-compile the package on an x86 device for another target,
you can use the following command:
```bash
rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_dds --cmake-args
-DCMAKE_BUILD_TYPE=Release --cmake-args -DCROSS_ARCH=<arch>
```
where `<arch>` is the target architecture (e.g. `aarch64-unknown-linux-gnu`).
The architecture list can be found
[here](https://doc.rust-lang.org/nightly/rustc/platform-support.html).
.
The cross-compilation uses `zig` as a linker. You can install it following the
instructions [here](https://ziglang.org/download/). Also, the `zigbuild`
package is required to be installed on the target device. You can install it
following the instructions
[here](https://github.com/rust-cross/cargo-zigbuild#installation).
.
## Docker image
The **`zenoh-bridge-dds`** standalone executable is also available as [Docker
images](https://hub.docker.com/r/eclipse/zenoh-bridge-dds/tags?page=1&ordering=last_updated)
for both amd64 and arm64. To get it, do:
- `docker pull eclipse/zenoh-bridge-dds:latest` for the latest release
- `docker pull eclipse/zenoh-bridge-dds:master` for the master branch version
(nightly build)
.
:warning: **However, notice that its usage is limited to Docker on Linux,
using the `--net host` option.**
This is because DDS uses UDP multicast and Docker doesn't support UDP
multicast between a container and its host (see cases
[moby/moby#23659](https://github.com/moby/moby/issues/23659),
[moby/libnetwork#2397](https://github.com/moby/libnetwork/issues/2397) or
[moby/libnetwork#552](https://github.com/moby/libnetwork/issues/552)). The
only known way to make it work is to use the `--net host` option, which is
[only supported on Linux hosts](https://docs.docker.com/network/host/).
.
Usage: **`docker run --init --net host eclipse/zenoh-bridge-dds`**
It supports the same command line arguments as `zenoh-bridge-dds` (see
below, or check with the `-h` argument).
.
## For a quick test with ROS2 turtlesim
Prerequisites:
- A [ROS2 environment](http://docs.ros.org/en/galactic/Installation.html) (any
DDS implementation will do, as long as it implements the standard DDSI
protocol - the default [Eclipse
CycloneDDS](https://github.com/eclipse-cyclonedds/cyclonedds) is just fine)
- The [turtlesim
package](http://docs.ros.org/en/galactic/Tutorials/Turtlesim/Introducing-Turtlesim.html#install-turtlesim)
.
### _1 host, 2 ROS domains_
For a quick test on a single host, you can run the `turtlesim_node` and the
`turtle_teleop_key` on distinct ROS domains. As soon as you run 2
`zenoh-bridge-dds` (1 per domain) the `turtle_teleop_key` can drive the
`turtlesim_node`.
Here are the commands to run:
- `ROS_DOMAIN_ID=1 ros2 run turtlesim turtlesim_node`
- `ROS_DOMAIN_ID=2 ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -d 1`
- `./target/release/zenoh-bridge-dds -d 2`
.
Notice that by default the 2 bridges will discover each other using UDP
multicast.
.
### _2 hosts, avoiding UDP multicast communication_
By default DDS (and thus ROS2) uses UDP multicast for discovery and
publications. But on some networks, UDP multicast is poorly supported or not
supported at all. In such cases, deploying a `zenoh-bridge-dds` on both hosts
allows you to:
- limit the DDS discovery traffic, as detailed in [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
- route all the DDS publications made on UDP multicast by each node through
the zenoh protocol, which uses TCP by default.
.
Here are the commands to test this configuration with turtlesim:
- on host 1:
- `ROS_DOMAIN_ID=1 ros2 run turtlesim turtlesim_node`
- `./target/release/zenoh-bridge-dds -d 1 -l tcp/0.0.0.0:7447`
- on host 2:
- `ROS_DOMAIN_ID=2 ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -d 2 -e tcp/<ip-of-host-1>:7447` - where
`<ip-of-host-1>` is the IP of host 1
.
Notice that to avoid unwanted direct DDS communication, 2 distinct ROS domains
are still used.
.
### _2 hosts, with an intermediate zenoh router in the cloud_
In case your 2 hosts can't communicate point-to-point, you can leverage a
[zenoh router](https://github.com/eclipse-zenoh/zenoh#how-to-build-it)
deployed in a cloud instance (any Linux VM will do the job). You just need to
configure your cloud instance with a public IP and authorize the TCP port
**7447**.
.
:warning: the zenoh protocol is still under development, leading to possible
incompatibilities between the bridge and the router if their zenoh versions
differ. Please make sure you use a zenoh router built from a recent commit on
its `master` branch.
.
Here are the commands to test this configuration with turtlesim:
- on cloud VM:
- `zenohd`
- on host 1:
- `ros2 run turtlesim turtlesim_node`
- `./target/release/zenoh-bridge-dds -e tcp/<cloud-ip>:7447`
_where `<cloud-ip>` is the IP of your cloud instance_
- on host 2:
- `ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -e tcp/<cloud-ip>:7447`
_where `<cloud-ip>` is the IP of your cloud instance_
.
Notice that there is no need to use distinct ROS domains here, since the 2
hosts are not supposed to communicate directly with each other.
.
## More advanced usage for ROS2
### _Full support of ROS graph and topic lists via the forward discovery mode_
By default the bridge doesn't route the DDS discovery traffic through zenoh to
the remote bridges.
This means that, if you use 2 **`zenoh-bridge-dds`** to interconnect 2 DDS
domains, the DDS entities discovered in one domain won't be advertised in the
other domain. Thus, the DDS data will be routed between the 2 domains only if
matching readers and writers are declared in the 2 domains independently.
.
This default behaviour has an impact on ROS2: on one side of the bridge, the
ROS graph might not reflect all the nodes from the other side of the bridge.
The `ros2 topic list` command might not list all the topics declared on the
other side. And the **ROS graph** is limited to the nodes in each domain.
.
But using the **`--fwd-discovery`** (or `-f`) option for all bridges makes them
behave differently:
- each bridge will forward via zenoh the local DDS discovery data to the
remote bridges (in a more compact way than the original DDS discovery traffic)
- each bridge receiving DDS discovery data via zenoh will create a replica of
the DDS reader or writer, with similar QoS. Those replicas will serve the route
to/from zenoh, and will be discovered by the ROS2 nodes.
- each bridge will forward the `ros_discovery_info` data (in a less intensive
way than the original publications) to the remote bridges. On reception, the
remote bridges will convert the original entities' GIDs into the GIDs of the
corresponding replicas, and re-publish the `ros_discovery_info` on DDS. The
full ROS graph can then be discovered by the ROS2 nodes on each host.
.
### _Limiting the ROS2 topics, services, parameters or actions to be routed_
By default 2 zenoh bridges will route all ROS2 topics and services for which
they detect a Writer on one side and a Reader on the other side. But you might
want to prevent some topics and services from being routed by the bridge.
.
When starting `zenoh-bridge-dds` you can use the `--allow` argument to specify
the subset of topics and services that will be routed by the bridge. This
argument accepts a string which is a regular expression that must match a
substring of an allowed zenoh key (see details of [mapping of ROS2 names to
zenoh keys](#mapping-ros2-names-to-zenoh-keys)).
.
Here are some examples of usage:
| `--allow` value | allowed ROS2 communication |
| :-- | :-- |
| `/rosout` | `/rosout` |
| `/rosout\|/turtle1/cmd_vel\|/turtle1/rotate_absolute` | `/rosout`<br>`/turtle1/cmd_vel`<br>`/turtle1/rotate_absolute` |
| `/rosout\|/turtle1/` | `/rosout` and all `/turtle1` topics, services, parameters and actions |
| `/turtle1/.*` | all topics and services with a name containing `/turtle1/` |
| `/turtle1/` | same: all topics, services, parameters and actions with a name containing `/turtle1/` |
| `rt/turtle1` | all topics with a name containing `/turtle1` (no services, parameters or actions) |
| `rq/turtle1\|/rr/turtle1` | all services and parameters with a name containing `/turtle1` (no topics or actions) |
| `rq/turtlesim/.*parameter\|/rr/turtlesim/.*parameter` | all parameters with a name containing `/turtlesim` (no topics, services or actions) |
| `rq/turtle1/.*/_action\|/rr/turtle1/.*/_action` | all actions with a name containing `/turtle1` (no topics, services or parameters) |
.
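Since the `--allow` value is matched as a regular expression against a
substring of the zenoh key, you can reproduce the check with `grep`. This is a
sketch for illustration only; the `allowed` helper below is hypothetical and
not part of the bridge:
```bash
# allowed: sketch of the substring regex test the bridge applies to a zenoh key
allowed() { echo "$2" | grep -qE "$1"; }

allowed '/rosout|/turtle1/' 'rt/rosout'       && echo "routed"
allowed '/rosout|/turtle1/' 'rt/turtle2/pose' || echo "not routed"
```
.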
### _Running several robots without changing the ROS2 configuration_
If you run similar robots on the same network, by default they will all use the
same DDS topics, leading to interference in their operations.
A simple way to address this issue using the zenoh bridge is to:
- deploy 1 zenoh bridge per robot
- have each bridge started with the `--scope "/<robot-id>"` argument, each
robot having its own id
- make sure each robot cannot directly communicate via DDS with another robot,
by setting a distinct domain per robot or by configuring its network interface
to not route UDP multicast outside the host.
.
Using the `--scope` option, a prefix is added to each zenoh key
published/subscribed by the bridge (more details in [mapping of ROS2 names to
zenoh keys](#mapping-ros2-names-to-zenoh-keys)). To interact with a robot, a
remote ROS2 application must use a zenoh bridge configured with the same scope
as the robot.
.
### _Closer integration of ROS2 with zenoh_
As you have understood, when using the zenoh bridge each ROS2 publication and
subscription is mapped to a zenoh key. Therefore, it's relatively easy to
develop an application using one of the [zenoh
APIs](https://zenoh.io/docs/apis/apis/) to interact with one or more robots at
the same time.
.
See in details how to achieve that in [this
blog](https://zenoh.io/blog/2021-04-28-ros2-integration/).
.
## Configuration
.
`zenoh-bridge-dds` can be configured via a JSON5 file passed via the `-c`
argument. You can see a commented example of such a configuration file:
[`DEFAULT_CONFIG.json5`](DEFAULT_CONFIG.json5).
.
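As a sketch (the field names below are assumptions based on typical zenoh
configurations; check `DEFAULT_CONFIG.json5` for the authoritative list), a
minimal configuration could look like:
```json5
{
  // zenoh session settings (assumed names; see DEFAULT_CONFIG.json5)
  mode: "peer",
  connect: { endpoints: ["tcp/192.168.1.1:7447"] },
  plugins: {
    dds: {
      // DDS plugin settings, mirroring the CLI arguments
      domain: 1,          // DDS domain ID (-d)
      scope: "/robot1",   // key prefix (-s)
      allow: "/turtle1/", // regex of allowed keys (-a)
    },
  },
}
```
.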
The `"dds"` part of this same configuration file can also be used in the
configuration file for the zenoh router (within its `"plugins"` part). The
router will automatically try to load the plugin library (`zenoh-plugin-dds`)
at startup and apply its configuration.
.
`zenoh-bridge-dds` also accepts the following arguments. If set, each argument
will override the similar setting from the configuration file:
* zenoh-related arguments:
- **`-c, --config <FILE>`** : a config file
- **`-m, --mode <MODE>`** : The zenoh session mode. Default: `peer`. Possible
values: `peer` or `client`.
See [zenoh
documentation](https://zenoh.io/docs/getting-started/key-concepts/#deployment-units)
for more details.
- **`-l, --listen <LOCATOR>`** : A locator on which the bridge will listen
for incoming sessions. Repeat this option to open several listeners. Example of
locator: `tcp/localhost:7447`.
- **`-e, --peer <LOCATOR>`** : A peer locator the bridge will try to
connect to (typically another bridge or a zenoh router). Repeat this option to
connect to several peers. Example of locator: `tcp/<ip-address>:7447`.
- **`--no-multicast-scouting`** : disable the zenoh scouting protocol that
allows automatic discovery of zenoh peers and routers.
- **`-i, --id <hex-string>`** : The identifier (as a hexadecimal string,
e.g. `0A0B23...`) that the zenoh bridge must use. **WARNING: this identifier
must be unique in the system!** If not set, a random UUIDv4 will be used.
- **`--group-member-id <ID>`** : The bridges supervise each other via zenoh
liveliness tokens. This option allows setting a custom identifier for the
bridge, which will be used in the liveliness token key (if not specified, the
zenoh UUID is used).
- **`--rest-http-port <PORT>`** : set the REST API http port
(default: 8000)
* DDS-related arguments:
- **`-d, --domain <ID>`** : The DDS Domain ID. By default set to `0`, or to
`$ROS_DOMAIN_ID` if this environment variable is defined.
- **`--dds-localhost-only`** : If set, the DDS discovery and traffic will
occur only on the localhost interface (127.0.0.1).
By default set to false, unless the "ROS_LOCALHOST_ONLY=1" environment
variable is defined.
- **`--dds-enable-shm`** : If set, DDS will be configured to use shared
memory. Requires the bridge to be built with the `dds_shm` feature for this
option to be valid.
By default set to false.
- **`-f, --fwd-discovery`** : When set, rather than creating a local route
when discovering a local DDS entity, this discovery info is forwarded to the
remote plugins/bridges. Those will create the routes, including a replica of
the discovered entity. More details
[here](#full-support-of-ros-graph-and-topic-lists-via-the-forward-discovery-mode)
- **`-s, --scope <String>`** : A string used as a prefix to scope DDS traffic
when mapped to zenoh keys.
- **`-a, --allow <String>`** : A regular expression matching the set of
'partition/topic-name' that must be routed via zenoh.
By default, all partitions and topics are allowed.
If both 'allow' and 'deny' are set, a partition and/or topic will be
allowed if it matches only the 'allow' expression.
Repeat this option to configure several topic expressions. These
expressions are concatenated with '|'.
Examples of expressions:
  - `.*/TopicA` will allow only `TopicA` to be routed, whatever the
partition.
  - `PartitionX/.*` will allow all the topics to be routed, but only on
`PartitionX`.
  - `cmd_vel|rosout` will allow only the topics containing `cmd_vel` or
`rosout` in their name or partition name to be routed.
- **`--deny <String>`** : A regular expression matching the set of
'partition/topic-name' that must NOT be routed via zenoh.
By default, no partitions and no topics are denied.
If both 'allow' and 'deny' are set, a partition and/or topic will be
allowed if it matches only the 'allow' expression.
Repeat this option to configure several topic expressions. These
expressions are concatenated with '|'.
- **`--max-frequency <String>...`** : specifies a maximum frequency of data
routing over zenoh per-topic. The string must have the format `"regex=float"`
where:
  - `"regex"` is a regular expression matching the set of
'partition/topic-name' for which the data (per DDS instance) must be routed at
no higher rate than the associated max frequency (same syntax as the --allow
option).
  - `"float"` is the maximum frequency in Hertz; if the publication rate is
higher, downsampling will occur when routing.
.
(usable multiple times)
- **`--queries-timeout <Duration>`** : A duration in seconds (default: 5.0
sec) that will be used as a timeout when the bridge
queries any other remote bridge for discovery information and for
historical data for TRANSIENT_LOCAL DDS Readers it serves
(i.e. if the query to the remote bridge exceeds the timeout, some
historical samples might not be routed to the Readers,
but the route will not be blocked forever).
- **`-w, --generalise-pub <String>`** : A list of key expressions to use
for generalising the declaration of
the zenoh publications, and thus minimizing the discovery traffic (usable
multiple times).
See [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
for more details.
- **`-r, --generalise-sub <String>`** : A list of key expressions to use
for generalising the declaration of
the zenoh subscriptions, and thus minimizing the discovery traffic (usable
multiple times).
See [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
for more details.
.
## Admin space
.
The zenoh bridge for DDS exposes an administration space allowing you to browse
the DDS entities that have been discovered (with their QoS), and the routes
that have been established between DDS and zenoh.
This administration space is accessible via any zenoh API, including the REST
API that you can activate at `zenoh-bridge-dds` startup using the
`--rest-http-port` argument.
.
The `zenoh-bridge-dds` exposes this administration space with paths prefixed by
`@/service/<uuid>/dds` (where `<uuid>` is the unique identifier of the bridge
instance). The information is then organized in paths such as:
- `@/service/<uuid>/dds/version` : the bridge version
- `@/service/<uuid>/dds/config` : the bridge configuration
- `@/service/<uuid>/dds/participant/<gid>/reader/<gid>/<topic>` : a discovered
DDS reader on `<topic>`
- `@/service/<uuid>/dds/participant/<gid>/writer/<gid>/<topic>` : a discovered
DDS writer on `<topic>`
- `@/service/<uuid>/dds/route/from_dds/<zenoh-key>` : a route established
from a DDS writer to a zenoh key named `<zenoh-key>` (see [mapping
rules](#mapping-dds-topics-to-zenoh-resources)).
- `@/service/<uuid>/dds/route/to_dds/<zenoh-key>` : a route established
from a zenoh key named `<zenoh-key>` to a DDS reader (see [mapping
rules](#mapping-dds-topics-to-zenoh-resources)).
.
Examples of queries on the administration space using the REST API with the
`curl` command line tool (don't forget to activate the REST API with the
`--rest-http-port 8000` argument):
- List all the DDS entities that have been discovered:
```bash
curl http://localhost:8000/@/service/**/participant/**
```
- List all established routes:
```bash
curl http://localhost:8000/@/service/**/route/**
```
- List all discovered DDS entities and established routes for topic `cmd_vel`:
```bash
curl http://localhost:8000/@/service/**/cmd_vel
```
.
> _Pro tip: pipe the result into the [**jq**](https://stedolan.github.io/jq/)
command for JSON pretty-printing or transformation._
.
## Architecture details
.
Whether it's built as a library or as a standalone executable, the **zenoh
bridge for DDS** does the same things:
- in default mode:
- it discovers the DDS readers and writers declared by any DDS application,
via the standard DDS discovery protocol (that uses UDP multicast)
- it creates a mirror DDS writer or reader for each discovered reader or
writer (using the same QoS)
- it maps the discovered DDS topics and partitions to zenoh keys (see mapping
details below)
- it forwards user's data from a DDS topic to the corresponding zenoh key,
and vice versa
- it does not forward to the remote bridge any DDS discovery information
.
- in "forward discovery" mode
- it behaves as described
[here](#full-support-of-ros-graph-and-topic-lists-via-the-forward-discovery-mode)
### _Mapping of DDS topics to zenoh keys_
The mapping between DDS and zenoh is rather straightforward: given a DDS
Reader/Writer for topic **`A`** without the partition QoS set, then the
equivalent zenoh key will have the same name: **`A`**.
If a partition QoS **`P`** is defined, the equivalent zenoh key will be named
as **`P/A`**.
.
Optionally, the bridge can be configured with a **scope** that will be used as
a prefix to each zenoh key.
That is, for scope **`S`** the equivalent zenoh key will be:
- **`S/A`** for a topic **`A`** without partition
- **`S/P/A`** for a topic **`A`** and a partition **`P`**
.
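These rules can be sketched as a tiny shell helper. The `zkey` function below
is hypothetical and for illustration only; it is not part of the bridge:
```bash
# zkey: hypothetical helper mirroring the mapping rules above
# (scope and partition are optional; empty strings are skipped)
zkey() {
  local scope="$1" partition="$2" topic="$3"
  echo "${scope:+${scope}/}${partition:+${partition}/}${topic}"
}

zkey ""  ""  "A"   # topic A, no partition, no scope -> A
zkey ""  "P" "A"   # with partition P                -> P/A
zkey "S" "P" "A"   # with scope S and partition P    -> S/P/A
```
.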
### _Mapping ROS2 names to zenoh keys_
The mapping from ROS2 topic and service names to DDS topics is specified
[here](https://design.ros2.org/articles/topic_and_service_names.html#mapping-of-ros-2-topic-and-service-names-to-dds-concepts).
Notice that ROS2 does not use the DDS partitions.
As a consequence of this mapping and of the DDS to zenoh mapping specified
above, here are some examples of mapping from ROS2 names to zenoh keys:
.
| ROS2 names | DDS Topics names | zenoh keys (no scope) | zenoh keys (if scope="`myscope`") |
| --- | --- | --- | --- |
| topic: `/rosout` | `rt/rosout` | `rt/rosout` | `myscope/rt/rosout` |
| topic: `/turtle1/cmd_vel` | `rt/turtle1/cmd_vel` | `rt/turtle1/cmd_vel` | `myscope/rt/turtle1/cmd_vel` |
| service: `/turtle1/set_pen` | `rq/turtle1/set_penRequest`<br>`rr/turtle1/set_penReply` | `rq/turtle1/set_penRequest`<br>`rr/turtle1/set_penReply` | `myscope/rq/turtle1/set_penRequest`<br>`myscope/rr/turtle1/set_penReply` |
| action: `/turtle1/rotate_absolute` | `rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`rt/turtle1/rotate_absolute/_action/status`<br>`rt/turtle1/rotate_absolute/_action/feedback` | `rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`rt/turtle1/rotate_absolute/_action/status`<br>`rt/turtle1/rotate_absolute/_action/feedback` | `myscope/rq/turtle1/rotate_absolute/_action/send_goalRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/send_goalReply`<br>`myscope/rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br>`myscope/rq/turtle1/rotate_absolute/_action/get_resultRequest`<br>`myscope/rr/turtle1/rotate_absolute/_action/get_resultReply`<br>`myscope/rt/turtle1/rotate_absolute/_action/status`<br>`myscope/rt/turtle1/rotate_absolute/_action/feedback` |
| all parameters for node `turtlesim` | `rq/turtlesim/list_parametersRequest`<br>`rr/turtlesim/list_parametersReply`<br>`rq/turtlesim/describe_parametersRequest`<br>`rr/turtlesim/describe_parametersReply`<br>`rq/turtlesim/get_parametersRequest`<br>`rr/turtlesim/get_parametersReply`<br>`rr/turtlesim/get_parameter_typesReply`<br>`rq/turtlesim/get_parameter_typesRequest`<br>`rq/turtlesim/set_parametersRequest`<br>`rr/turtlesim/set_parametersReply`<br>`rq/turtlesim/set_parameters_atomicallyRequest`<br>`rr/turtlesim/set_parameters_atomicallyReply` | `rq/turtlesim/list_parametersRequest`<br>`rr/turtlesim/list_parametersReply`<br>`rq/turtlesim/describe_parametersRequest`<br>`rr/turtlesim/describe_parametersReply`<br>`rq/turtlesim/get_parametersRequest`<br>`rr/turtlesim/get_parametersReply`<br>`rr/turtlesim/get_parameter_typesReply`<br>`rq/turtlesim/get_parameter_typesRequest`<br>`rq/turtlesim/set_parametersRequest`<br>`rr/turtlesim/set_parametersReply`<br>`rq/turtlesim/set_parameters_atomicallyRequest`<br>`rr/turtlesim/set_parameters_atomicallyReply` | `myscope/rq/turtlesim/list_parametersRequest`<br>`myscope/rr/turtlesim/list_parametersReply`<br>`myscope/rq/turtlesim/describe_parametersRequest`<br>`myscope/rr/turtlesim/describe_parametersReply`<br>`myscope/rq/turtlesim/get_parametersRequest`<br>`myscope/rr/turtlesim/get_parametersReply`<br>`myscope/rr/turtlesim/get_parameter_typesReply`<br>`myscope/rq/turtlesim/get_parameter_typesRequest`<br>`myscope/rq/turtlesim/set_parametersRequest`<br>`myscope/rr/turtlesim/set_parametersReply`<br>`myscope/rq/turtlesim/set_parameters_atomicallyRequest`<br>`myscope/rr/turtlesim/set_parameters_atomicallyReply` |
| specific ROS discovery topic | `ros_discovery_info` | `ros_discovery_info` | `myscope/ros_discovery_info` |
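.
For topics, the chain of mappings above can be sketched as follows. The
`ros2_topic_to_key` function is hypothetical and for illustration only (`rt` is
the prefix ROS2 adds to topic names; services and actions use the `rq`/`rr`
prefixes shown in the table):
```bash
# ros2_topic_to_key: ROS2 topic name -> DDS topic -> zenoh key
# (topics only; an empty scope adds no prefix)
ros2_topic_to_key() {
  local scope="$1" topic="$2"
  echo "${scope:+${scope}/}rt${topic}"
}

ros2_topic_to_key ""        "/rosout"           # -> rt/rosout
ros2_topic_to_key "myscope" "/turtle1/cmd_vel"  # -> myscope/rt/turtle1/cmd_vel
```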
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Package: zenoh-plugin-dds
Architecture: armhf
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 3572
Depends: zenohd (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-plugin-dds_0.10.0-rc_armhf.deb
Size: 1211856
MD5sum: f4fe8ab19b208d28623176771754b4fe
SHA1: 53bb3cec4ee93056decdc5aad60fae935de1c274
SHA256: 88468db4f5cfa0f8789baa07e1c577d9ea314cc94f1f36419165f24e710546bc
SHA512: 2c34367fdf5a9448ea4a2b32c2296fa19015d9f889f4e2ed9b8c0306e15d7af963b6621cf18ad8ff04e20ee025c55652473ae010d6aea3120fd393898af61880
Homepage: http://zenoh.io
Description: Zenoh plugin for ROS2 and DDS in general
.
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# DDS plugin and standalone `zenoh-bridge-dds`
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Docker image:** see [below](#Docker-image)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
## Background
The Data Distribution Service (DDS) is a standard for data-centric
publish-subscribe. Whilst DDS has been around for quite some time and has a
long history of deployments in various industries, it has recently gained quite
a bit of attention thanks to its adoption by the Robot Operating System 2
(ROS2), where it is used for communication between ROS2 nodes.
.
## Robot Swarms and Edge Robotics
As mentioned above, ROS2 has adopted DDS as the mechanism to exchange data
between nodes within and potentially across a robot. That said, due to some of
the very core assumptions at the foundations of the DDS wire protocol, besides
the fact that it leverages UDP/IP multicast for communication, it is not so
straightforward to scale DDS communication over a WAN or across multiple LANs.
Zenoh, on the other hand, was designed from its inception to operate at
Internet scale.
.

.
Thus, the main motivations to have a **DDS plugin** for **Eclipse zenoh** are:
.
- Facilitate the interconnection of robot swarms.
- Support use cases of edge robotics.
- Give the possibility to use **zenoh**'s geo-distributed storage and query
system to better manage robots' data.
.
Like any plugin for Eclipse zenoh, it can be dynamically loaded by a zenoh
router, at startup or at runtime.
In addition, this project also provides a standalone version of this plugin as
an executable binary named `zenoh-bridge-dds`.
.
## How to install it
.
To install the latest release of either the DDS plugin for the Zenoh router or
the `zenoh-bridge-dds` standalone executable, proceed as follows:
.
### Manual installation (all platforms)
.
All release packages can be downloaded from:
- https://download.eclipse.org/zenoh/zenoh-plugin-dds/latest/
.
Each subdirectory has the name of the Rust target. See which platforms each
target corresponds to at
https://doc.rust-lang.org/stable/rustc/platform-support.html
.
Choose your platform and download:
- the `zenoh-plugin-dds-<version>-<target>.zip` file for the plugin.
Then unzip it in the same directory as `zenohd` or to any directory where
it can find the plugin library (e.g. /usr/lib)
- the `zenoh-bridge-dds-<version>-<target>.zip` file for the standalone
executable.
Then unzip it where you want, and run the extracted `zenoh-bridge-dds`
binary.
.
### Linux Debian
.
Add Eclipse Zenoh private repository to the sources list:
.
```bash
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" |
sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update
```
Then either:
- install the plugin with: `sudo apt install zenoh-plugin-dds`.
- install the standalone executable with: `sudo apt install
zenoh-bridge-dds`.
.
## How to build it
.
> :warning: **WARNING** :warning: : Zenoh and its ecosystem are under active
development. When you build from git, make sure you also build from git any
other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.).
It may happen that some changes in git are not compatible with the most recent
packaged Zenoh release (e.g. deb, docker, pip). We put particular effort in
maintaining compatibility between the various git repositories in the Zenoh
project.
.
> :warning: **WARNING** :warning: : As Rust doesn't have a stable ABI, the
plugins should be
built with the exact same Rust version as `zenohd`, and using the same version
(or commit number) of the `zenoh` dependency as `zenohd`.
Otherwise, incompatibilities in the memory mapping of shared types between
`zenohd` and the library can lead to a `SIGSEGV` crash.
.
In order to build the zenoh bridge for DDS you need first to install the
following dependencies:
.
- [Rust](https://www.rust-lang.org/tools/install). If you already have the Rust
toolchain installed, make sure it is up-to-date with:
.
```bash
$ rustup update
```
.
- On Linux, make sure the `llvm` and `clang` development packages are
installed:
  - on Debian do: `sudo apt install llvm-dev libclang-dev`
  - on CentOS or RHEL do: `sudo yum install llvm-devel clang-devel`
  - on Alpine do: `apk add llvm11-dev clang-dev`
- [CMake](https://cmake.org/download/) (to build CycloneDDS which is a native
dependency)
.
Once these dependencies are in place, you may clone the repository on your
machine:
.
```bash
$ git clone https://github.com/eclipse-zenoh/zenoh-plugin-dds.git
$ cd zenoh-plugin-dds
```
> :warning: **WARNING** :warning: : On Linux, don't use the `cargo build`
command without specifying a package with `-p`. Building both
`zenoh-plugin-dds` (plugin library) and `zenoh-bridge-dds` (standalone
executable) together will lead to a "multiple definition of `load_plugin`"
error at link time. See
[#117](https://github.com/eclipse-zenoh/zenoh-plugin-dds/issues/117#issuecomment-1439694331)
for explanations.
.
You can then choose between building the zenoh bridge for DDS:
- as a plugin library that can be dynamically loaded by the zenoh router
(`zenohd`):
```bash
$ cargo build --release -p zenoh-plugin-dds
```
The plugin shared library (`*.so` on Linux, `*.dylib` on macOS, `*.dll` on
Windows) will be generated in the `target/release` subdirectory.
.
- or as a standalone executable binary:
```bash
$ cargo build --release -p zenoh-bridge-dds
```
The **`zenoh-bridge-dds`** binary will be generated in the `target/release`
sub-directory.
.
.
### Enabling Cyclone DDS Shared Memory Support
.
Cyclone DDS Shared memory support is provided by the [Iceoryx
library](https://iceoryx.io/). Iceoryx introduces additional system
requirements which are documented
[here](https://iceoryx.io/v2.0.1/getting-started/installation/#dependencies).
.
To build the zenoh bridge for DDS with support for shared memory the `dds_shm`
optional feature must be enabled during the build process as follows:
- plugin library:
```bash
$ cargo build --release -p zenoh-plugin-dds --features dds_shm
```
.
- standalone executable binary:
```bash
$ cargo build --release -p zenoh-bridge-dds --features dds_shm
```
.
**Note:** Iceoryx does not need to be installed to build the bridge when the
`dds_shm` feature is enabled. Iceoryx will be automatically downloaded,
compiled, and statically linked into the zenoh bridge as part of the cargo
build process.
.
When the zenoh bridge is configured to use DDS shared memory (see
[Configuration](#configuration)), the **Iceoryx RouDi daemon (`iox-roudi`)**
must be running in order for the bridge to start successfully. If it is not
started, the bridge will wait for a period of time for the daemon to become
available before timing out and terminating.
.
When building the zenoh bridge with the `dds_shm` feature enabled the
`iox-roudi` daemon is also built for convenience. The daemon can be found under
`target/debug|release/build/cyclors-/out/iceoryx-build/bin/iox-roudi`.
.
See
[here](https://cyclonedds.io/docs/cyclonedds/latest/shared_memory/shared_memory.html)
for more details of shared memory support in Cyclone DDS.
.
.
### ROS2 package
If you're a ROS2 user, you can also build `zenoh-bridge-dds` as a ROS package
by running:
```bash
rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_dds --cmake-args -DCMAKE_BUILD_TYPE=Release
```
The `rosdep` command will automatically install *Rust* and *clang* as build
dependencies.
.
If you want to cross-compile the package on an x86 device for any target, you
can use the following command:
```bash
rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_dds --cmake-args -DCMAKE_BUILD_TYPE=Release --cmake-args -DCROSS_ARCH=<arch>
```
where `<arch>` is the target architecture (e.g. `aarch64-unknown-linux-gnu`).
The architecture list can be found
[here](https://doc.rust-lang.org/nightly/rustc/platform-support.html).
.
The cross-compilation uses `zig` as a linker. You can install it following the
instructions [here](https://ziglang.org/download/). Also, the `zigbuild`
package is required to be installed on the target device. You can install it
following the instructions
[here](https://github.com/rust-cross/cargo-zigbuild#installation).
.
## Docker image
The **`zenoh-bridge-dds`** standalone executable is also available as [Docker
images](https://hub.docker.com/r/eclipse/zenoh-bridge-dds/tags?page=1&ordering=last_updated)
for both amd64 and arm64. To get it, do:
- `docker pull eclipse/zenoh-bridge-dds:latest` for the latest release
- `docker pull eclipse/zenoh-bridge-dds:master` for the master branch version
(nightly build)
.
:warning: **However, notice that its usage is limited to Docker on Linux and
using the `--net host` option.**
This is because DDS uses UDP multicast and Docker doesn't support UDP
multicast between a container and its host (see cases
[moby/moby#23659](https://github.com/moby/moby/issues/23659),
[moby/libnetwork#2397](https://github.com/moby/libnetwork/issues/2397) or
[moby/libnetwork#552](https://github.com/moby/libnetwork/issues/552)). The only
known way to make it work is to use the `--net host` option that is [only
supported on Linux hosts](https://docs.docker.com/network/host/).
.
Usage: **`docker run --init --net host eclipse/zenoh-bridge-dds`**
It supports the same command line arguments as `zenoh-bridge-dds` (see
below or check with the `-h` argument).
.
## For a quick test with ROS2 turtlesim
Prerequisites:
- A [ROS2 environment](http://docs.ros.org/en/galactic/Installation.html) (no
matter the DDS implementation, as long as it implements the standard DDSI
protocol; the default [Eclipse
CycloneDDS](https://github.com/eclipse-cyclonedds/cyclonedds) is just fine)
- The [turtlesim
package](http://docs.ros.org/en/galactic/Tutorials/Turtlesim/Introducing-Turtlesim.html#install-turtlesim)
.
### _1 host, 2 ROS domains_
For a quick test on a single host, you can run the `turtlesim_node` and the
`turtle_teleop_key` on distinct ROS domains. As soon as you run 2
`zenoh-bridge-dds` instances (1 per domain), the `turtle_teleop_key` can drive
the `turtlesim_node`.
Here are the commands to run:
- `ROS_DOMAIN_ID=1 ros2 run turtlesim turtlesim_node`
- `ROS_DOMAIN_ID=2 ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -d 1`
- `./target/release/zenoh-bridge-dds -d 2`
.
Notice that by default the 2 bridges will discover each other using UDP
multicast.
.
### _2 hosts, avoiding UDP multicast communication_
By default DDS (and thus ROS2) uses UDP multicast for discovery and
publications. But on some networks, UDP multicast is not supported or only
poorly supported. In such cases, deploying `zenoh-bridge-dds` on both hosts
allows you to:
- limit the DDS discovery traffic, as detailed in [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
- route all the DDS publications made on UDP multicast by each node through
the zenoh protocol, which by default uses TCP.
.
Here are the commands to test this configuration with turtlesim:
- on host 1:
- `ROS_DOMAIN_ID=1 ros2 run turtlesim turtlesim_node`
- `./target/release/zenoh-bridge-dds -d 1 -l tcp/0.0.0.0:7447`
- on host 2:
- `ROS_DOMAIN_ID=2 ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -d 2 -e tcp/<host-1-ip>:7447` - where
`<host-1-ip>` is the IP of host 1
.
Notice that to avoid unwanted direct DDS communication, 2 distinct ROS domains
are still used.
.
### _2 hosts, with an intermediate zenoh router in the cloud_
In case your 2 hosts can't communicate point-to-point, you could leverage a
[zenoh router](https://github.com/eclipse-zenoh/zenoh#how-to-build-it) deployed
on a cloud instance (any Linux VM will do the job). You just need to configure
your cloud instance with a public IP and authorize TCP port **7447**.
.
:warning: the zenoh protocol is still under development, leading to possible
incompatibilities between the bridge and the router if their zenoh versions
differ. Please make sure you use a zenoh router built from a recent commit on
its `master` branch.
.
Here are the commands to test this configuration with turtlesim:
- on cloud VM:
- `zenohd`
- on host 1:
- `ros2 run turtlesim turtlesim_node`
- `./target/release/zenoh-bridge-dds -e tcp/<cloud-ip>:7447`
_where `<cloud-ip>` is the IP of your cloud instance_
- on host 2:
- `ros2 run turtlesim turtle_teleop_key`
- `./target/release/zenoh-bridge-dds -e tcp/<cloud-ip>:7447`
_where `<cloud-ip>` is the IP of your cloud instance_
.
Notice that there is no need to use distinct ROS domains here, since the 2
hosts are not supposed to communicate directly with each other.
.
## More advanced usage for ROS2
### _Full support of ROS graph and topic lists via the forward discovery mode_
By default the bridge doesn't route the DDS discovery traffic through zenoh to
the remote bridges. This means that if you use 2 **`zenoh-bridge-dds`** to
interconnect 2 DDS domains, the DDS entities discovered in one domain won't be
advertised in the other domain. Thus, the DDS data will be routed between the
2 domains only if matching readers and writers are declared in the 2 domains
independently.
.
This default behaviour has an impact on ROS2 behaviour: on one side of the
bridge the ROS graph might not reflect all the nodes from the other side of the
bridge. The `ros2 topic list` command might not list all the topics declared on
the other side. And the **ROS graph** is limited to the nodes in each domain.
.
But using the **`--fwd-discovery`** (or `-f`) option for all bridges makes
them behave differently:
- each bridge will forward via zenoh the local DDS discovery data to the
remote bridges (in a more compact way than the original DDS discovery traffic)
- each bridge receiving DDS discovery data via zenoh will create a replica of
the DDS reader or writer, with similar QoS. Those replicas will serve the route
to/from zenoh, and will be discovered by the ROS2 nodes.
- each bridge will forward the `ros_discovery_info` data (in a less intensive
way than the original publications) to the remote bridges. On reception, the
remote bridges will convert the original entities' GIDs into the GIDs of the
corresponding replicas, and re-publish on DDS the `ros_discovery_info`. The
full ROS graph can then be discovered by the ROS2 nodes on each host.
.
### _Limiting the ROS2 topics, services, parameters or actions to be routed_
By default 2 zenoh bridges will route all ROS2 topics and services for which
they detect a Writer on one side and a Reader on the other side. But you might
want to prevent some topics and services from being routed by the bridge.
.
When starting `zenoh-bridge-dds` you can use the `--allow` argument to specify
the subset of topics and services that will be routed by the bridge. This
argument accepts a string which is a regular expression that must match a
substring of an allowed zenoh key (see details of [mapping of ROS2 names to
zenoh keys](#mapping-ros2-names-to-zenoh-keys)).
.
Here are some examples of usage:
| `--allow` value | allowed ROS2 communication |
| :-- | :-- |
| `/rosout` | `/rosout` |
| `/rosout\|/turtle1/cmd_vel\|/turtle1/rotate_absolute` | `/rosout`<br/>`/turtle1/cmd_vel`<br/>`/turtle1/rotate_absolute` |
| `/rosout\|/turtle1/` | `/rosout` and all `/turtle1` topics, services, parameters and actions |
| `/turtle1/.*` | all topics and services with name containing `/turtle1/` |
| `/turtle1/` | same: all topics, services, parameters and actions with name containing `/turtle1/` |
| `rt/turtle1` | all topics with name containing `/turtle1` (no services, parameters or actions) |
| `rq/turtle1\|/rr/turtle1` | all services and parameters with name containing `/turtle1` (no topics or actions) |
| `rq/turtlesim/.*parameter\|/rr/turtlesim/.*parameter` | all parameters with name containing `/turtlesim` (no topics, services or actions) |
| `rq/turtle1/.*/_action\|/rr/turtle1/.*/_action` | all actions with name containing `/turtle1` (no topics, services or parameters) |
.
### _Running several robots without changing the ROS2 configuration_
If you run similar robots in the same network, they will by default all use
the same DDS topics, leading to interferences in their operations.
A simple way to address this issue using the zenoh bridge is to:
- deploy 1 zenoh bridge per robot
- have each bridge started with the `--scope "/<robot-id>"` argument, each
robot having its own id.
- make sure each robot cannot directly communicate via DDS with another robot
by setting a distinct domain per robot, or configuring its network interface to
not route UDP multicast outside the host.
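.
The steps above can be sketched as the following per-robot commands (the robot
ids `bot1`/`bot2`, the domain ids and the `my_pkg my_node` application are
hypothetical placeholders, not part of any real setup):
.
```bash
# On robot 1 (hypothetical node and ids):
ROS_DOMAIN_ID=1 ros2 run my_pkg my_node &
./target/release/zenoh-bridge-dds -d 1 --scope "/bot1"
.
# On robot 2:
ROS_DOMAIN_ID=2 ros2 run my_pkg my_node &
./target/release/zenoh-bridge-dds -d 2 --scope "/bot2"
```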
.
Using the `--scope` option, a prefix is added to each zenoh key
published/subscribed by the bridge (more details in [mapping of ROS2 names to
zenoh keys](#mapping-ros2-names-to-zenoh-keys)). To interact with a robot, a
remote ROS2 application must use a zenoh bridge configured with the same scope
as the robot.
.
### _Closer integration of ROS2 with zenoh_
As you have understood, when using the zenoh bridge, each ROS2 publication and
subscription is mapped to a zenoh key. Therefore, it's relatively easy to
develop an application using one of the [zenoh
APIs](https://zenoh.io/docs/apis/apis/) to interact with one or more robots at
the same time.
.
See in detail how to achieve that in [this
blog](https://zenoh.io/blog/2021-04-28-ros2-integration/).
.
## Configuration
.
`zenoh-bridge-dds` can be configured via a JSON5 file passed via the
`-c` argument. You can see a commented example of such a configuration file:
[`DEFAULT_CONFIG.json5`](DEFAULT_CONFIG.json5).
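.
As an illustration only, a minimal configuration could look like the sketch
below. The exact field names and their defaults are assumptions here;
[`DEFAULT_CONFIG.json5`](DEFAULT_CONFIG.json5) is the authoritative,
commented reference:
.
```json5
// Hypothetical minimal sketch - check DEFAULT_CONFIG.json5 for real fields.
{
  plugins: {
    dds: {
      domain: 0,           // DDS Domain ID (like -d)
      scope: "/bot1",      // prefix added to zenoh keys (like -s)
      allow: "/turtle1/",  // regex of keys to route (like -a)
    },
  },
  mode: "peer",            // zenoh session mode (like -m)
  connect: {
    endpoints: ["tcp/192.168.1.1:7447"],  // peers to connect to (like -e)
  },
}
```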
.
The `"dds"` part of this same configuration file can also be used in the
configuration file for the zenoh router (within its `"plugins"` part). The
router will automatically try to load the plugin library (`zenoh_plugin_dds`)
at startup and apply its configuration.
.
`zenoh-bridge-dds` also accepts the following arguments. If set, each argument
will override the similar setting from the configuration file:
* zenoh-related arguments:
- **`-c, --config <FILE>`** : a config file
- **`-m, --mode <MODE>`** : The zenoh session mode. Default: `peer`. Possible
values: `peer` or `client`.
See [zenoh
documentation](https://zenoh.io/docs/getting-started/key-concepts/#deployment-units)
for more details.
- **`-l, --listen <LOCATOR>`** : A locator on which this router will listen
for incoming sessions. Repeat this option to open several listeners. Example of
locator: `tcp/localhost:7447`.
- **`-e, --peer <LOCATOR>`** : A peer locator this router will try to
connect to (typically another bridge or a zenoh router). Repeat this option to
connect to several peers. Example of locator: `tcp/<ip-address>:7447`.
- **`--no-multicast-scouting`** : disable the zenoh scouting protocol that
allows automatic discovery of zenoh peers and routers.
- **`-i, --id <hex_string>`** : The identifier (as a hexadecimal string -
e.g.: 0A0B23...) that the zenoh bridge must use. **WARNING: this identifier
must be unique in the system!** If not set, a random UUIDv4 will be used.
- **`--group-member-id <ID>`** : The bridges supervise each other via zenoh
liveliness tokens. This option allows setting a custom identifier for the
bridge, that will be used in the liveliness token key (if not specified, the
zenoh UUID is used).
- **`--rest-http-port <rest-http-port>`** : set the REST API http port
(default: 8000)
* DDS-related arguments:
- **`-d, --domain <ID>`** : The DDS Domain ID. By default set to `0`, or to
`"$ROS_DOMAIN_ID"` if this environment variable is defined.
- **`--dds-localhost-only`** : If set, the DDS discovery and traffic will
occur only on the localhost interface (127.0.0.1).
By default set to false, unless the "ROS_LOCALHOST_ONLY=1" environment
variable is defined.
- **`--dds-enable-shm`** : If set, DDS will be configured to use shared
memory. Requires the bridge to be built with the 'dds_shm' feature for this
option to be valid.
By default set to false.
- **`-f, --fwd-discovery`** : When set, rather than creating a local route
when discovering a local DDS entity, this discovery info is forwarded to the
remote plugins/bridges. Those will create the routes, including a replica of
the discovered entity. More details
[here](#full-support-of-ros-graph-and-topic-lists-via-the-forward-discovery-mode)
- **`-s, --scope <String>`** : A string used as prefix to scope DDS traffic
when mapped to zenoh keys.
- **`-a, --allow <String>`** : A regular expression matching the set of
'partition/topic-name' that must be routed via zenoh.
By default, all partitions and topics are allowed.
If both 'allow' and 'deny' are set a partition and/or topic will be
allowed if it matches only the 'allow' expression.
Repeat this option to configure several topic expressions. These
expressions are concatenated with '|'.
Examples of expressions:
- `.*/TopicA` will allow only the `TopicA` to be routed, whatever the
partition.
- `PartitionX/.*` will allow all the topics to be routed, but only on
`PartitionX`.
- `cmd_vel|rosout` will allow only the topics containing `cmd_vel` or
`rosout` in their name or partition name to be routed.
- **`--deny <String>`** : A regular expression matching the set of
'partition/topic-name' that must NOT be routed via zenoh.
By default, no partitions and no topics are denied.
If both 'allow' and 'deny' are set a partition and/or topic will be
allowed if it matches only the 'allow' expression.
Repeat this option to configure several topic expressions. These
expressions are concatenated with '|'.
- **`--max-frequency <String>=<float>`** : specifies a maximum frequency of
data routing over zenoh per-topic. The string must have the format
`"<regex>=<float>"` where:
- `"<regex>"` is a regular expression matching the set of
'partition/topic-name' for which the data (per DDS instance) must be routed
at no higher rate than the associated max frequency (same syntax as the
--allow option).
- `"<float>"` is the maximum frequency in Hertz; if the publication rate is
higher, downsampling will occur when routing.
.
(usable multiple times)
- **`--queries-timeout <Duration>`** : A duration in seconds (default: 5.0
sec) that will be used as a timeout when the bridge
queries any other remote bridge for discovery information and for
historical data for TRANSIENT_LOCAL DDS Readers it serves
(i.e. if the query to the remote bridge exceeds the timeout, some
historical samples might not be routed to the Readers,
but the route will not be blocked forever).
- **`-w, --generalise-pub <String>`** : A list of key expressions to use
for generalising the declaration of
the zenoh publications, and thus minimizing the discovery traffic (usable
multiple times).
See [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
for more details.
- **`-r, --generalise-sub <String>`** : A list of key expressions to use
for generalising the declaration of
the zenoh subscriptions, and thus minimizing the discovery traffic (usable
multiple times).
See [this
blog](https://zenoh.io/blog/2021-03-23-discovery/#leveraging-resource-generalisation)
for more details.
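.
As an illustration, several of the arguments above can be combined in a single
invocation. The values below are purely illustrative (hypothetical topics and
rates):
.
```bash
./target/release/zenoh-bridge-dds \
  -d 1 \
  --allow '/turtle1/' \
  --deny '/rosout' \
  --max-frequency '/turtle1/cmd_vel=10.0'  # route cmd_vel at 10 Hz max
```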
.
## Admin space
.
The zenoh bridge for DDS exposes an administration space allowing you to
browse the DDS entities that have been discovered (with their QoS), and the
routes that have been established between DDS and zenoh.
This administration space is accessible via any zenoh API, including the REST
API that you can activate at `zenoh-bridge-dds` startup using the
`--rest-http-port` argument.
.
The `zenoh-bridge-dds` exposes this administration space with paths prefixed
by `@/service/<uuid>/dds` (where `<uuid>` is the unique identifier of the
bridge instance). The information is then organized in such paths:
- `@/service/<uuid>/dds/version` : the bridge version
- `@/service/<uuid>/dds/config` : the bridge configuration
- `@/service/<uuid>/dds/participant/<gid>/reader/<gid>/<topic>` : a discovered
DDS reader on `<topic>`
- `@/service/<uuid>/dds/participant/<gid>/writer/<gid>/<topic>` : a discovered
DDS writer on `<topic>`
- `@/service/<uuid>/dds/route/from_dds/<zenoh-resource>` : a route established
from a DDS writer to a zenoh key named `<zenoh-resource>` (see [mapping
rules](#mapping-dds-topics-to-zenoh-resources)).
- `@/service/<uuid>/dds/route/to_dds/<zenoh-resource>` : a route established
from a zenoh key named `<zenoh-resource>` (see [mapping
rules](#mapping-dds-topics-to-zenoh-resources)).
.
Examples of queries on the administration space using the REST API with the
`curl` command line tool (don't forget to activate the REST API with the
`--rest-http-port 8000` argument):
- List all the DDS entities that have been discovered:
```bash
curl http://localhost:8000/@/service/**/participant/**
```
- List all established routes:
```bash
curl http://localhost:8000/@/service/**/route/**
```
- List all discovered DDS entities and established routes for topic `cmd_vel`:
```bash
curl http://localhost:8000/@/service/**/cmd_vel
```
.
> _Pro tip: pipe the result into the [**jq**](https://stedolan.github.io/jq/)
command for JSON pretty printing or transformation._
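.
For example, to pretty-print the bridge version (assuming the REST API is
enabled on port 8000; the key expression is quoted so the shell does not
expand the `**` wildcard):
.
```bash
curl -s 'http://localhost:8000/@/service/**/dds/version' | jq .
```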
.
## Architecture details
.
Whether it's built as a library or as a standalone executable, the **zenoh
bridge for DDS** does the same things:
- in default mode:
- it discovers the DDS readers and writers declared by any DDS application,
via the standard DDS discovery protocol (that uses UDP multicast)
- it creates a mirror DDS writer or reader for each discovered reader or
writer (using the same QoS)
- it maps the discovered DDS topics and partitions to zenoh keys (see mapping
details below)
- it forwards user's data from a DDS topic to the corresponding zenoh key,
and vice versa
- it does not forward to the remote bridge any DDS discovery information
.
- in "forward discovery" mode:
- it behaves as described
[here](#full-support-of-ros-graph-and-topic-lists-via-the-forward-discovery-mode)
.
### _Mapping of DDS topics to zenoh keys_
The mapping between DDS and zenoh is rather straightforward: given a DDS
Reader/Writer for topic **`A`** without the partition QoS set, then the
equivalent zenoh key will have the same name: **`A`**.
If a partition QoS **`P`** is defined, the equivalent zenoh key will be named
as **`P/A`**.
.
Optionally, the bridge can be configured with a **scope** that will be used as
a prefix to each zenoh key.
That is, for scope **`S`** the equivalent zenoh key will be:
- **`S/A`** for a topic **`A`** without partition
- **`S/P/A`** for a topic **`A`** and a partition **`P`**
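.
This mapping is plain string concatenation of the optional scope, the optional
partition and the topic name, which can be sketched as:
.
```bash
# Build the zenoh key for a DDS topic, with optional partition and scope.
make_key() {
  scope="$1"; partition="$2"; topic="$3"
  key="$topic"
  if [ -n "$partition" ]; then key="$partition/$key"; fi
  if [ -n "$scope" ]; then key="$scope/$key"; fi
  printf '%s\n' "$key"
}
make_key ""  ""  "A"   # → A
make_key ""  "P" "A"   # → P/A
make_key "S" ""  "A"   # → S/A
make_key "S" "P" "A"   # → S/P/A
```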
.
### _Mapping ROS2 names to zenoh keys_
The mapping from ROS2 topics and services names to DDS topics is specified
[here](https://design.ros2.org/articles/topic_and_service_names.html#mapping-of-ros-2-topic-and-service-names-to-dds-concepts).
Notice that ROS2 does not use the DDS partitions.
As a consequence of this mapping and of the DDS to zenoh mapping specified
above, here are some examples of mapping from ROS2 names to zenoh keys:
.
| ROS2 names | DDS Topics names | zenoh keys (no scope) | zenoh keys (if scope="`myscope`") |
| --- | --- | --- | --- |
| topic: `/rosout` | `rt/rosout` | `rt/rosout` | `myscope/rt/rosout` |
| topic: `/turtle1/cmd_vel` | `rt/turtle1/cmd_vel` | `rt/turtle1/cmd_vel` | `myscope/rt/turtle1/cmd_vel` |
| service: `/turtle1/set_pen` | `rq/turtle1/set_penRequest`<br/>`rr/turtle1/set_penReply` | `rq/turtle1/set_penRequest`<br/>`rr/turtle1/set_penReply` | `myscope/rq/turtle1/set_penRequest`<br/>`myscope/rr/turtle1/set_penReply` |
| action: `/turtle1/rotate_absolute` | `rq/turtle1/rotate_absolute/_action/send_goalRequest`<br/>`rr/turtle1/rotate_absolute/_action/send_goalReply`<br/>`rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br/>`rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br/>`rq/turtle1/rotate_absolute/_action/get_resultRequest`<br/>`rr/turtle1/rotate_absolute/_action/get_resultReply`<br/>`rt/turtle1/rotate_absolute/_action/status`<br/>`rt/turtle1/rotate_absolute/_action/feedback` | `rq/turtle1/rotate_absolute/_action/send_goalRequest`<br/>`rr/turtle1/rotate_absolute/_action/send_goalReply`<br/>`rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br/>`rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br/>`rq/turtle1/rotate_absolute/_action/get_resultRequest`<br/>`rr/turtle1/rotate_absolute/_action/get_resultReply`<br/>`rt/turtle1/rotate_absolute/_action/status`<br/>`rt/turtle1/rotate_absolute/_action/feedback` | `myscope/rq/turtle1/rotate_absolute/_action/send_goalRequest`<br/>`myscope/rr/turtle1/rotate_absolute/_action/send_goalReply`<br/>`myscope/rq/turtle1/rotate_absolute/_action/cancel_goalRequest`<br/>`myscope/rr/turtle1/rotate_absolute/_action/cancel_goalReply`<br/>`myscope/rq/turtle1/rotate_absolute/_action/get_resultRequest`<br/>`myscope/rr/turtle1/rotate_absolute/_action/get_resultReply`<br/>`myscope/rt/turtle1/rotate_absolute/_action/status`<br/>`myscope/rt/turtle1/rotate_absolute/_action/feedback` |
| all parameters for node `turtlesim` | `rq/turtlesim/list_parametersRequest`<br/>`rr/turtlesim/list_parametersReply`<br/>`rq/turtlesim/describe_parametersRequest`<br/>`rr/turtlesim/describe_parametersReply`<br/>`rq/turtlesim/get_parametersRequest`<br/>`rr/turtlesim/get_parametersReply`<br/>`rq/turtlesim/get_parameter_typesRequest`<br/>`rr/turtlesim/get_parameter_typesReply`<br/>`rq/turtlesim/set_parametersRequest`<br/>`rr/turtlesim/set_parametersReply`<br/>`rq/turtlesim/set_parameters_atomicallyRequest`<br/>`rr/turtlesim/set_parameters_atomicallyReply` | `rq/turtlesim/list_parametersRequest`<br/>`rr/turtlesim/list_parametersReply`<br/>`rq/turtlesim/describe_parametersRequest`<br/>`rr/turtlesim/describe_parametersReply`<br/>`rq/turtlesim/get_parametersRequest`<br/>`rr/turtlesim/get_parametersReply`<br/>`rq/turtlesim/get_parameter_typesRequest`<br/>`rr/turtlesim/get_parameter_typesReply`<br/>`rq/turtlesim/set_parametersRequest`<br/>`rr/turtlesim/set_parametersReply`<br/>`rq/turtlesim/set_parameters_atomicallyRequest`<br/>`rr/turtlesim/set_parameters_atomicallyReply` | `myscope/rq/turtlesim/list_parametersRequest`<br/>`myscope/rr/turtlesim/list_parametersReply`<br/>`myscope/rq/turtlesim/describe_parametersRequest`<br/>`myscope/rr/turtlesim/describe_parametersReply`<br/>`myscope/rq/turtlesim/get_parametersRequest`<br/>`myscope/rr/turtlesim/get_parametersReply`<br/>`myscope/rq/turtlesim/get_parameter_typesRequest`<br/>`myscope/rr/turtlesim/get_parameter_typesReply`<br/>`myscope/rq/turtlesim/set_parametersRequest`<br/>`myscope/rr/turtlesim/set_parametersReply`<br/>`myscope/rq/turtlesim/set_parameters_atomicallyRequest`<br/>`myscope/rr/turtlesim/set_parameters_atomicallyReply` |
| specific ROS discovery topic | `ros_discovery_info` | `ros_discovery_info` | `myscope/ros_discovery_info` |
Vcs-Browser: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Vcs-Git: https://github.com/eclipse-zenoh/zenoh-plugin-dds
Package: zenoh-plugin-mqtt
Architecture: amd64
Version: 0.10.0-rc
Priority: optional
Section: net
Maintainer: zenoh-dev@eclipse.org
Installed-Size: 3590
Depends: zenohd (=0.10.0-rc)
Filename: ./0.10.0-rc/zenoh-plugin-mqtt_0.10.0-rc_amd64.deb
Size: 1040636
MD5sum: 460d01843e56597e299f39bc520d1e45
SHA1: 13119bbb2f799de34ac220ae68b6185bce3cc094
SHA256: 81775c113d4ffef98e6b1912e9a8ce269e76a29a4ffb5d9ac684921c9fc4005b
SHA512: 8ad44331374c6663a83eb56ff4ad5c8212e46426bc966d95bd11a2bdaff2ac6c7c52b90af1083022ddeb34feeb4ec64644ab247ebcfefd52199597d62c181cc2
Homepage: http://zenoh.io
Description: Zenoh plugin for MQTT
.
[](https://github.com/eclipse-zenoh/zenoh-plugin-mqtt/actions?query=workflow%3ARust)
[](https://github.com/eclipse-zenoh/roadmap/discussions)
[](https://discord.gg/2GJ958VuHs)
[](https://choosealicense.com/licenses/epl-2.0/)
[](https://opensource.org/licenses/Apache-2.0)
.
# Eclipse Zenoh
The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
.
Zenoh (pronounce _/zeno/_) unifies data in motion, data at rest and
computations. It carefully blends traditional pub/sub with geo-distributed
storages, queries and computations, while retaining a level of time and space
efficiency that is well beyond any of the mainstream stacks.
.
Check the website [zenoh.io](http://zenoh.io) and the
[roadmap](https://github.com/eclipse-zenoh/roadmap) for more detailed
information.
.
-------------------------------
# MQTT plugin and standalone `zenoh-bridge-mqtt`
.
:point_right: **Install latest release:** see [below](#How-to-install-it)
.
:point_right: **Docker image:** see [below](#Docker-image)
.
:point_right: **Build "master" branch:** see [below](#How-to-build-it)
.
## Background
.
[MQTT](https://mqtt.org/) is a pub/sub protocol leveraging a broker to route
the messages between the MQTT clients.
The MQTT plugin for Eclipse Zenoh acts as an MQTT broker, accepting
connections from MQTT clients (V3 and V5) and translating the MQTT pub/sub
into a Zenoh pub/sub.
I.e.:
- an MQTT publication on topic `device/123/temperature` is routed as a Zenoh
publication on key expression `device/123/temperature`
- an MQTT subscription on topic `device/#` is mapped to a Zenoh subscription
on key expression `device/**`
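.
The multi-level wildcard translation shown above (MQTT `#` becoming zenoh
`**`) is a plain textual substitution, which can be sketched with `sed`:
.
```bash
# Translate an MQTT multi-level wildcard topic into a zenoh key expression.
mqtt_topic='device/#'
zenoh_keyexpr=$(printf '%s\n' "$mqtt_topic" | sed 's|#|**|')
echo "$zenoh_keyexpr"   # → device/**
```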
.
This allows a close integration of any MQTT system with Zenoh, but also brings
to MQTT systems the benefits of a Zenoh routing infrastructure.
Some examples of use cases:
- Routing MQTT from the device to the Edge and to the Cloud
- Bridging 2 distinct MQTT systems across the Internet, with NAT traversal
- Pub/sub to MQTT via the Zenoh REST API
- MQTT-ROS2 (robot) communication
- Store MQTT publications in any Zenoh storage (RocksDB, InfluxDB, file
system...)
- MQTT record/replay with InfluxDB as a storage
.
The MQTT plugin for Eclipse Zenoh is available either as a dynamic library to
be loaded by the Zenoh router (`zenohd`), or as a standalone executable
(`zenoh-bridge-mqtt`) that can act as a Zenoh client or peer.
.
## Configuration
.
`zenoh-bridge-mqtt` can be configured via a JSON5 file passed via the
`-c` argument. You can see a commented example of such a configuration file:
[`DEFAULT_CONFIG.json5`](DEFAULT_CONFIG.json5).
.
The `"mqtt"` part of this same configuration file can also be used in the
configuration file for the zenoh router (within its `"plugins"` part). The
router will automatically try to load the plugin library (`zenoh_plugin_mqtt`)
at startup and apply its configuration.
.
`zenoh-bridge-mqtt` also accepts the following arguments. If set, each argument
will override the similar setting from the configuration file:
* zenoh-related arguments:
- **`-c, --config <FILE>`** : a config file
- **`-m, --mode <MODE>`** : The zenoh session mode. Default: `peer`. Possible
values: `peer` or `client`.
See [zenoh
documentation](https://zenoh.io/docs/getting-started/key-concepts/#deployment-units)
for more details.
- **`-l, --listen <LOCATOR>`** : A locator on which this router will listen
for incoming sessions. Repeat this option to open several listeners. Example of
locator: `tcp/localhost:7447`.
- **`-e, --peer <LOCATOR>`** : A peer locator this router will try to
connect to (typically another bridge or a zenoh router). Repeat this option to
connect to several peers. Example of locator: `tcp/<ip-address>:7447`.
- **`--no-multicast-scouting`** : disable the zenoh scouting protocol that
allows automatic discovery of zenoh peers and routers.
- **`-i, --id <hex_string>`** : The identifier (as a hexadecimal string -
e.g.: 0A0B23...) that the zenoh bridge must use. **WARNING: this identifier
must be unique in the system!** If not set, a random UUIDv4 will be used.
- **`--rest-http-port [PORT | IP:PORT]`** : Configures HTTP interface for
the REST API (disabled by default, setting this option enables it). Accepted
values:
- a port number
- a string with format `<local_ip>:<port>` (to bind the HTTP
server to a specific interface).
* MQTT-related arguments:
- **`-p, --port [PORT | IP:PORT]`** : The address to bind the MQTT server.
Default: `"0.0.0.0:1883"`. Accepted values:
- a port number (`"0.0.0.0"` will be used as IP to bind, meaning any
interface of the host)
- a string with format `<local_ip>:<port>` (to bind the MQTT
server to a specific interface).
- **`-s, --scope <String>`** : A string added as prefix to all routed MQTT
topics when mapped to a zenoh key expression. This should be used to avoid
conflicts when several distinct MQTT systems using the same topic names are
routed via Zenoh.
- **`-a, --allow <String>`** : A regular expression matching the MQTT topic
name that must be routed via zenoh. By default all topics are allowed. If both
`--allow` and `--deny` are set a topic will be allowed if it matches only the
'allow' expression.
- **`--deny <String>`** : A regular expression matching the MQTT topic name
that must not be routed via zenoh. By default no topics are denied. If both
`--allow` and `--deny` are set a topic will be allowed if it matches only the
'allow' expression.
- **`-w, --generalise-pub