API review
Proposer: The catkin team
Present at review:
- Thibault Kruse
- Dirk Thomas
- Andrew Somerville
- Lorenz Mösenlechner
- Jack O'Quin
- Tully Foote
- Jonathan Bohren
Contents
-
API review
- Prerequisites
- Restrictions
- Environment variables
- rosbuild (rb)
- Catkin (ctk)
- Alternatives Comparison
- Feature map
-
Use cases
- Installing ROS+stacks from source
- Adding a stack from source
- Viewing source of installed package for debugging
- Removing a previously added-from-source stack
- Cross compiling stacks
- Daily work
- Packaging stacks
- Discovering stack / package dependencies using the dependency graph
- Wrap 3rd party libraries inside a package
- Suggestions
The following is a reconstruction of the decision process and of information gathered from different sources; it might be wrong and/or outdated.
The introduction of catkin as a replacement for rosbuild has raised a number of discussions. The change in approach needs to be justified by a change of requirements/constraints. This section tries to summarize the justification for catkin design decisions as discussed in several mailing list threads and documentation. For the discussions, see ros-users and the ROS Buildsystem SIG.
Since the workflows of rosmake/rosbuild, vanilla cmake and catkin are very different, it is important to understand the implications and forces.
Prerequisites
Catkin strives to be a tool that makes setting up developer environments, building, cross compiling and packaging easier. It competes with rosbuild, vanilla cmake, and other build projects like Autotools.
To understand Catkin, several concepts are helpful in discussion (also see the catkin glossary)
Lifecycle
The lifecycle of software development has several phases, in c++ those include:
configuring:
This step prepares the building step and allows parameters to be modified. As an example, it can define whether libraries contain debug information, or it can define parameters for cross-compilation. In practice this is what happens e.g. during the cmake command.
building:
based on a set of source files, binary files are created (including libraries and generated code). In practice this is what happens during the make command.
installing-from-source:
- after building, a set of source and binary files is copied to a location in the environment
packaging:
- create an archive file for a package manager (e.g. apt-get)
cross-compile:
is similar to building but for a different target architecture (e.g. building OSX or Windows binaries on a Linux machine)
installing:
installation of a package which was created by packaging
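For a typical c++ project, several of the phases above map onto a CMakeLists.txt plus a handful of commands. The following is a minimal illustrative sketch only; the project and file names are hypothetical.

```cmake
# Minimal CMakeLists.txt touching the phases above (names hypothetical).
cmake_minimum_required(VERSION 2.8)
project(myproject)

# building: declare a binary created from source files
add_executable(mynode src/mynode.cpp)

# installing-from-source: declare which files `make install` copies
install(TARGETS mynode RUNTIME DESTINATION bin)
```

Configuring is then `cmake ..`, building is `make`, installing-from-source is `make install`, and packaging tools (e.g. CPack) operate on the declared install targets.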
Build and install locations
Building and installing create files somewhere, and part of the challenge is to select useful places to build to and install to.
system install space/global install space:
These are folders outside the user home, e.g. /opt/ros/... or /usr/local/lib/... on Linux systems. Installing here affects all users (e.g. on a robot), and usually requires superuser (root) privileges. Installing to /opt/.../ requires the environment to point to that location (meaning you need to source something like a setup.sh).
userland install space/local install space:
These are folders inside the user home, e.g. ~/local on Linux systems. Installing here affects only that user, and no root privileges are required.
workspace:
A folder inside the user home where source projects are put into. A user may have multiple workspaces, and a workspace may have multiple projects. E.g. ~/ros/fuerte
build space:
- where compile results (executables and libraries) are put into during building
workspace build space:
A build space for all projects inside a workspace, e.g. ~/ros/fuerte/build
workspace project:
A source project inside a workspace, e.g. ~/ros/fuerte/navigation
Out-of-source project build space:
A build space inside a workspace project, for executables and libraries of that project, e.g. ~/ros/fuerte/navigation/build.
In-source project build space:
- If the workspace project and its project build space are the same, then this is called an "In-source build space".
The above definitions are important because vanilla cmake, rosbuild and catkin use different concepts for managing build and install spaces.
Artefact groups
- compilation unit:
- A cmake/make target (e.g. one executable, one library)
- build unit:
- the lowest level where you could in theory call just "make" ( ~= rosbuild package)
- build group:
- a set of build units with such strong API/functionality dependencies that they belong in the same release unit. A release unit may nevertheless contain multiple independent build groups.
- distribution unit:
- that which will be converted into e.g. a debian package
- packaging unit:
- distribution unit (just easier name to relate to the process of packaging)
- release unit:
- that which lives in the same VCS (sub-)tree and gets released together by giving a single release tag.
- Variant:
- A set of many distribution units to be installed with a single command
Since both rosbuild and catkin follow different design goals related to building, packaging and releasing, these definitions are important.
It is common to use the concept build group == release unit == packaging unit, because it is easiest to understand and support.
Restrictions
- One distribution unit (DU) which is distributed as a Debian package (currently stacks) should come from one single repository. Spreading one over multiple repositories would make the system unnecessarily complicated: CI-server jobs based on changes in multiple repositories, versioning/tagging across multiple repositories, etc.
Environment variables
A set of environment variables is used to influence how binaries, libraries, etc. are discovered. Important in this context are:
PATH: a list of directories where executables are looked up
LD_LIBRARY_PATH: a list of directories where libraries are looked up
PYTHONPATH: a list of paths where Python modules are looked up
PKG_CONFIG_PATH: a list of paths where pkg-config searches for PKG.pc files
CMAKE_PREFIX_PATH: a list of paths where CMake searches for PKG-config.cmake files used for find_package()
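As a hypothetical sketch, this is what a generated setup.sh typically does: it prepends an install space to each of the lookup paths listed above. The prefix /opt/ros/fuerte and the Python subdirectory are illustrative assumptions.

```shell
# Illustrative sketch of a generated setup.sh (prefix is hypothetical).
PREFIX=/opt/ros/fuerte
export PATH="$PREFIX/bin:$PATH"                  # executables
export LD_LIBRARY_PATH="$PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PYTHONPATH="$PREFIX/lib/python2.7/dist-packages${PYTHONPATH:+:$PYTHONPATH}"
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
export CMAKE_PREFIX_PATH="$PREFIX${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
```

The `${VAR:+:$VAR}` idiom appends the old value only if it was non-empty, avoiding stray leading/trailing colons.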
rosbuild (rb)
Design Goals
- rb_G1: building generates an environment which enables running binaries
- rb_G2: machine-readable meta-information
- rb_G3: Single command build of all interdependent software projects
- rb_G4: release multiple independent packages in just one step (using stacks)
- rb_G5: when running binaries after building, uncompiled resources should be used directly (without copying them) so that changes are immediately effective
- rb_G6: tightly couple multiple packages and ship them always together
- rb_G7: workspace toolset to run configure and build step for each package in isolation
- rb_G8: Fool-proof single invocation that attempts to configure and build all packages in environment
Design Decisions
- rb_DD1: separate release unit (stack) from build unit (package) [rb_G4]
- rb_DD2: declare package-to-package dependencies and compiler flags in manifests [rb_G2, rb_G3, rb_G4]
- rb_DD3: cmake invoked per package, in-place builds (CMAKE_BUILD_DIR == CMAKE_SOURCE_DIR) [rb_G1, rb_G7]
- rb_DD4: Python parsing of manifest and custom creation of a build order for packages [rb_G3, rb_DD2, rb_DD5]
- rb_DD5: rosmake wrapper to cmake for single invocation [rb_G3]
- rb_DD6: In-source build and ROS_PACKAGE_PATH for resource location in environment [rb_G1, rb_G5]
- rb_DD7: grouping several packages into a single release unit (stack) [rb_G4, rb_G6]
- rb_DD8: rosmake attempts to invoke cmake and make for all packages in the ROS_PACKAGE_PATH [rb_G8]
Shortcomings
- does not provide an installation-from-source step (no make install)
- packaging just packs the mixture of source and build artifacts (which contains many more files than necessary)
- compiler-flags apply uniformly to all targets in a package [rb_DD2]
- does not allow multiple builds from a single source tree (due to in-source builds) [rb_DD3]
- does not support building external code on top of ROS packages (due to in-source builds) [rb_DD6]
- rosmake is very slow even when no or only a few changes have been made to the code [rb_DD3]
Catkin (ctk)
Design Goals
Same as rosbuild: rb_G1, rb_G2, rb_G3, rb_G5
Dropped:
- rb_G4, release must be done for each package individually (as opposed to per stack), but new tools like bloom make the release process easier
- rb_G7, toolset supports a workspace where all configure and build steps are only partly isolated, and only by conventions.
- rb_G8, catkin workflow only builds packages in the (current) workspace
Additional Goals
- ctk_G3: Avoid long environment variables (some OS have size limit)
- ctk_G4: Cross compilation support
- ctk_G5: packaging support
- ctk_G6: Less custom tool maintenance effort than rosbuild
- ctk_G7: FHS compliant layout (REP 122)
- ctk_G8: use existing tools as much as possible, be as little invasive as possible, work well together with other projects
- ctk_G9: installation-from-source support
- ctk_G10: allow multiple builds from a single source tree
- ctk_G11: use individual compiler/linker-flags per target
- ctk_G12: play nice with external code, allow build external code on top of ROS stuff
- ctk_G13: improve performance of make cycle
Design Decisions
- ctk_DD1: cmake invoked per workspace, thereby single invocation [rb_G3, ctk_G6, ctk_G13]
- ctk_DD2: Single build and release unit (stack) [ctk_DD1]
- ctk_DD3: out-of-source build [ctk_G4, ctk_G5, ctk_G10]
- ctk_DD4: build into centralized workspace build folder [rb_G1, ctk_G3, ctk_G6, ctk_G8]
- ctk_DD5: while doing [ctk_DD3] Python code must be loadable from the build folder as well as from the source folder, so an __init__.py file gets generated which looks in both locations [rb_G1, rb_G5]
- ctk_DD6: Support for make install target [ctk_G4, ctk_G5, ctk_G7, ctk_G8, ctk_G12]
- ctk_DD7: Declare dependencies in stack manifests [rb_G2]
- ctk_DD8: Installed package folders are not subfolders of installed stack folders [ctk_DD1]
- ctk_DD9: Source folders also in environment for uncompiled resources [rb_G5]
- ctk_DD10: Use standard cmake find_package() to find stuff [ctk_G8]
- ctk_DD11: Generate PKG-config.cmake and PKG.pc code [ctk_G12]
- ctk_DD12: Python parsing of stack and custom creation of a build order for stacks [rb_G3, rb_DD5]
Alternatives Comparison
Traditional approach with cmake
CMake is a widely used tool that offers some flexibility in how it is used. We describe here the most common usage.
building: cmake creates files in the project build space.
- The dependencies are resolved by searching several paths specified as environment variables, as mentioned above. All generated files go into one build folder (and subfolders below it).
installing-from-source: cmake copies specific files (binaries and dedicated resources) to system or userland install spaces.
packaging: what is declared in installing-from-source targets gets packaged
In this approach, if source project C depends on source project B, and source project B depends on source project A, then the user must know this and build and install A, then B, then C for a complete build.
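The manual workflow can be sketched as follows. This is a hedged illustration: the project names and install prefix are hypothetical, and the real per-project step would be `cmake .. && make && make install`; here a stub stands in for it so the ordering burden is visible.

```shell
# Sketch of the manual vanilla-cmake workflow (names hypothetical).
PREFIX="$HOME/local"
BUILD_ORDER="projectA projectB projectC"   # dependency order chosen by hand

build_and_install() {
    # stand-in for: mkdir -p "$1/build" && cd "$1/build" &&
    #   cmake .. -DCMAKE_INSTALL_PREFIX="$PREFIX" && make && make install
    echo "installed $1 to $PREFIX"
}

# The user must repeat this for every project, in the right order.
for p in $BUILD_ORDER; do
    build_and_install "$p"
done
```

If the order is wrong (e.g. C before A), configuration of the dependent project fails because its dependencies are not yet in the install space.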
- benefits:
- cleanly separated builds
- short environment variables, since only specific paths under CMAKE_INSTALL_PREFIX need to be added to each environment variable
- limitations:
- users have to do a lot of error-prone repetitive work to build many projects.
- Uninstalling software must be managed by the user; the ad hoc approach practically makes uninstalling impossible (stow-like solutions allow uninstalling).
- The order for building the projects is chosen manually by the user.
- Necessary include directories and libraries to link are not passed along but must be stated explicitly for all recursive dependencies.
- since all projects need to be installed before the application can run, the workflow for e.g. modifying a Python file carries extreme overhead (make install needs to be called again, which copies the resource to the install space).
- Makes development work on multiple versions of the same software very cumbersome, because source overlays are not easy to set up with vanilla cmake.
rosbuild
rosbuild was created to ease a workflow different from the traditional one. rosbuild also uses cmake, but wraps the cmake command and does not require cmake install targets to be specified. rosbuild uses the in-source build space for both building and source-installing. rosmake uses custom code to invoke building in dependency projects in the right order, so the user does not need to remember doing this. rosbuild requires dependencies to be defined in special manifest files for packages and stacks. System dependencies are also defined in the manifests and can be resolved with rosdep. rosbuild also features a separation of stacks and packages, where packages are atomic build units, whereas stacks are atomic release units. Releasing also commonly implies packaging for stacks that are available through package managers.
- building: rosbuild+cmake creates files in the in-source build space (preventing ctk_G10). Compiler and linker flags are taken from the manifests of dependency packages. The environment of the build space includes all projects and achieves rb_G1 and rb_G5.
- environment: Based on the ROS_PACKAGE_PATH, all stacks and packages in install and build locations can be part of the environment.
- package: all source and binary files are packaged as they are (e.g. c++ source) (prevents ctk_G7)
- benefits:
- [rb_DD5]: Users can call rosmake in one package, and all dependency projects (and only those) are automatically built first, in the right order.
- Users can uninstall source packages by moving or deleting folders.
- Independent projects can be built in parallel.
- Users can avoid the pain of defining dependencies in cmake, needing only a small subset of cmake commands and using the manifest file for the rest.
- System dependencies resolved with rosdep.
- Meta information in manifests about packages can be used for wiki / indexing.
- Stack/package relationship allows locking versions of packages together in a stack.
- Easy lookup of c++ source in apt-get installed stacks (source in the version that runs)
- For the main platform (Ubuntu), users only need to use and learn a small subset of cmake (e.g. no advanced target management).
- packages in the workspace with broken configuration do not impact the build of other packages
- Fool-proof reconfiguration and rebuild of all dependencies with the same command (preventing users from forgetting to configure/build some package)
- limitations:
- Large environment variable ROS_PACKAGE_PATH needs to be maintained.
- In-source space becomes polluted.
- [ctk_G10]: Does not support multiple builds from a single source tree.
- Developers do not define resources to be installed, making it hard to package cleanly.
- Cross compilation is difficult.
- Non-rosbuild projects cannot depend on rosbuild projects.
- Requires manifest files in addition to cmake files.
- Manifest file version system is inferior to that of cmake (no optional dependencies, no required version numbers).
- Requires maintenance of rosbuild, rosmake, rosdep toolsets.
- Requires compilation flags to be manually declared in manifest to be used by other rosbuild projects.
- Debian packages do not respect FHS standard.
- apt-get stacks bloated with non-essential files (c++ source).
- Compiler and linker flags used to build a binary in a project are often far more than required, as a package dependency does not imply that all its compiler flags are needed, and different cmake targets may require only a subset of flags.
- Compiler and linker flag exports in manifests must be specified for every platform, compiler, etc., else the package is not portable to those other systems.
- Long waiting times due to mostly unnecessary configure and build steps in dependencies
- binaries do not end up in PATH, and have to be executed using rosrun
catkin
catkin allows projects to be built and installed as in the traditional cmake workflow. The cmake macros, however, also offer a different workflow, which is the intended usage of catkin. In the intended usage, building for all workspace projects uses the workspace build space, and the workspace build space is also part of the environment.
- build: cmake+catkin creates files in the workspace build space
- source-install: the workspace build space can already be in the environment, but an install command also moves selected files to system or userland install spaces. Thus, installed files can also be used from other workspaces.
- environment: system and userland install spaces, as well as workspace build space
- package: what is declared in install targets gets packaged
- benefits:
- All projects in the workspace can be built with a single command, parallelizing the build more than cmake.
- Source-installing is not required, but possible.
- Small environment. Easiest cross-compiling and packaging.
- No in-source space pollution.
- Catkin macros reduce the effort of defining dependencies compared to vanilla cmake.
- Installation respects FHS filesystem standard.
- Compared to rosbuild, packaging is not bloated.
- Configuration steps that are similar for many packages are just executed once (e.g. find_package(xyz))
- Re-Configuration only happens when the user invokes it (compared to rosmake)
- Separation of configuration and build steps (avoid wasted time)
- limitations:
- Projects' configure and build processes could conflict with each other (see details in section below), as stacks share the same cmake namespace
- Removing files from the build folder is harder than with rosbuild (but possible)
- Uninstall not supported (unless special catkin logic added)
- The indexing information needs to be restructured for the ROS wiki, wiki contents explaining steps in terms of packages may become broken.
- No stack-package relationship means less grouping of projects.
- Broken stack/package relationship breaks all ros tools that relied on the assumption.
- Catkin requires flat workspace folder layout (some developers may not like the lack of structure) [update: This will be changed]
- Compared to rosbuild, more difficult for developer to access sources of installed code (in the version that is installed).
- Some binaries can be installed into a global .../bin folder, meaning into the PATH, and can be executed without rosrun
- A single package with a broken configuration (e.g. missing dependencies) prevents the configuration of any other package in the workspace, even if that package is not required by the user (who might not know this)
On configuration & build speed
This is based on a post in the ROS buildsystem SIG.
Note that this benchmark was disturbed by bug https://code.ros.org/trac/ros/ticket/4036.
A quick benchmark to get an impression yielded the following results. They are for building the ros-base variant (the ros and ros_comm stacks) in electric and fuerte; the fuerte versions were based on catkin.
Full compile:
Distro | Step | Command | Time
Electric | configure and build from scratch | rosmake -a | 105.9s
Fuerte | configure from scratch | cmake .. | 14.6s
Fuerte | build from scratch, parallel | make -j 8 | 19.2s
Fuerte | build from scratch | make | 106.4s
Noop compile:
Distro | Step | Command | Time
Electric | configure and build again (noop) | rosmake -a | 27.1s
Fuerte | build again (noop) | make | 1.7s
Fuerte | build parallel (noop) | make | 0.2s
One file tweaked:
Distro | Step | Command | Time
Electric | configure and build after change | rosmake -a | 26.8s
Fuerte | configure again after change | cmake .. | 10.0s
Fuerte | build after change | make | 2.1s
Analysis by Tully:
The biggest difference I will note is that the noop build using a single cmake workspace is on the order of one second (less if you enable parallelization). And if you tweak a single file, it climbs up to 2 seconds. Whereas a noop build in rosmake takes 30 seconds. Also the actual full compile step takes about the same amount of time, if the fuerte build is not threaded. If it is threaded, it takes one fifth of the time.
A random comparison I'd note is that a full configure and build with a single workspace takes the same amount of time as a noop build using rosbuild. Also from profiling the rosmake builds, the minimum invocation time for cmake && make is about 0.3 to 0.5 seconds per package.
Causes for the differences:
- Causes that could (possibly) be fixed by changing rosmake:
- A bug in rosmake caused make never to run with multiple jobs: https://code.ros.org/trac/ros/ticket/4036
- Catkin does not automatically invoke the reconfiguration step; rosbuild does (https://code.ros.org/trac/ros/ticket/4035)
- Catkin only configures / builds packages inside one workspace (not relevant for benchmark above)
- rosbuild does not distinguish between build and runtime dependencies, meaning some packages that could be built in parallel aren't
- Causes that are inherent in the design differences:
- Similar configuration lookup steps for several packages are done just once in catkin (e.g. cmake find_package)
- Catkin can exploit parallelism based on make targets, not just packages
- Catkin can perform more c++ compilation in parallel (only linking needs sequential order)
- rosbuild requires many calls to rospack, many of which are avoided by catkin
On Catkin cmake conflicts
As the catkin design uses a single cmake process to build all projects in one catkin workspace, and uses a single build folder for all projects, name collisions are possible. The extent to which this can happen, the likelihood of it happening, and the effort to debug and fix such collisions are difficult to judge, so here are more technical details.
Build space conflicts
The suggested catkin workflow uses a single build folder in the workspace, into which all build products (c++ executables, c++ libraries, generated files) are placed, in a structure that mirrors the FHS layout the install target would create. This makes it easy to use the build space as if it were an install space.
- executable binaries
Executable binaries will either go into a common bin folder, or into another fhs compliant folder for binaries (e.g. lib/PKGNAME/libexec or libexec/PKGNAME). Name collisions can thus happen if two projects name an executable the same and put it into the common bin folder. Conversely, if two executables are to be delivered under different package names in e.g. lib, but with the same executable name, the catkin workspace will not allow building those two projects if they name the target the same, even though installation of the executables would work fine. In this case the target name must be prefixed to avoid that collision.
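A hedged sketch of the prefixing workaround mentioned above: the cmake target name is made unique across the workspace, while the installed executable can still carry the short name inside a package-specific folder. All names here are hypothetical.

```cmake
# Target name prefixed to be unique across the whole workspace...
add_executable(mypkg_node src/node.cpp)
# ...but the installed file can still be called just "node",
# placed in an FHS-style package-specific location.
set_target_properties(mypkg_node PROPERTIES OUTPUT_NAME node)
install(TARGETS mypkg_node RUNTIME DESTINATION lib/mypkg)
```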
- libs and .pc
Libraries thus go into a common "lib" folder. Collisions are possible if two projects name their libraries the same. Such collisions would also happen if the packages were installed using apt-get. If conflicting projects lie in the same workspace, cmake fails with a useful error message for such cases (duplicate target). If conflicting projects are developed by different teams, the conflict will appear at some later time. Generally this problem is no more constraining than for any Linux/Unix program development. .pc files go into lib/pkgconfig/PKGNAME.pc and are therefore conflict free.
- generated (non-binary) files, .cmake
Generated files go into a common share folder in the proposed FHS layout, and in there in a subfolder named after the package. This should allow avoiding namespace conflicts between projects. However if the developer makes a mistake in the definition of the target path, this can fail very late.
Cmake variable conflicts
A catkin workspace in the default setup acts like a single cmake project. Single cmake projects have different variable scopes; the default scope for a variable is the directory of the CMakeLists.txt it is defined in. Such variables are safe from conflicts.
E.g.
set(myfeature 42)
CMake also allows global variables and cached variables (which are global). Such variables can conflict with each other, typically silently (meaning the developer gets no warning or error from cmake; something bad just happens and they have to find out what and why).
Caching is a useful feature for catkin users, to avoid having to pass variable values on every invocation of cmake, e.g. cmake -Dmyfeature=42
E.g.
set(myfeature 42 CACHE STRING "description")
get_filename_component(VarName FileName CACHE)
option(CPACK_PACKAGES "Set to ON to build the packages. Requires cmake >2.4" OFF)
Such cached variables are also fine if users adhere to strict naming standards like using a prefix <packagename>_ for all such variables.
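For instance, a package could namespace all its cached variables like this (a sketch; the package and variable names are hypothetical):

```cmake
# Cached variables prefixed with the package name cannot collide with
# equally-named options of other projects in the same workspace.
set(mypackage_FEATURE_LEVEL 42 CACHE STRING "feature level of mypackage")
option(mypackage_BUILD_TESTS "build mypackage tests" OFF)
```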
Any global variable is also okay to use if ALL projects ALWAYS set the variable themselves before using it. As cmake does not process projects concurrently, projects would not influence each other that way. However, if a developer forgets this, side effects between projects can occur.
Several standard cmake commands use global or cached variables, such as:
find_program, find_package, find_library, find_path
This means that if two different catkin stacks use these functions with different options, the first one wins for the entire workspace, without warnings or errors from cmake.
E.g.
# In project B
find_package(Boost 1.40.0 EXACT COMPONENTS system)
# In project A (does not reproduce if both lines are used in the same directory)
find_package(Boost 1.46.1 EXACT COMPONENTS system)
The first command in one catkin project will make cmake ignore the second command in another catkin project, and there will be no warning or error if the components are exactly the same.
Notice how, in my version of catkin, projects later in lexical order are built earlier.
In this case cmake will check the boost version for every new component, meaning that if an earlier project demands a superset of the Boost components of a second project, the second project's boost version check will be ignored, despite the EXACT flag.
Another quirk is that a project using variables like ${Boost_INCLUDE_DIRS} no longer needs to call find_package(Boost...) itself, as long as some other project in the catkin workspace does (which means people will forget to call it).
[TF] This fails on my machine w/o 1.46 installed:
find_package(Boost 1.40.0 EXACT COMPONENTS system)
find_package(Boost 1.46.1 EXACT COMPONENTS thread)
This fails because different components are used.
And this passes:
find_package(Boost 1.40.0 COMPONENTS system)
find_package(Boost 1.39.0 REQUIRED COMPONENTS thread)
message(STATUS "${Boost_LIBRARIES}")
Like this:
-- Boost version: 1.40.0
-- Found the following Boost libraries:
--   system
-- Boost version: 1.40.0
-- Found the following Boost libraries:
--   thread
-- /usr/lib/libboost_system-mt.so;/usr/lib/libboost_thread-mt.so
Command names from macros obviously also share the global name space. Therefore, e.g.
macro(mymacro)
  message(STATUS bla)
endmacro()
makes this macro available to all later projects in the workspace (e.g. people may forget to define the macros they use in each project)
This also affects other commands relying on the global namespace such as
if (COMMAND mymacro) ...
include(CheckFunctionExists)
includes that standard module and its commands for ALL following projects in the workspace, meaning later projects may use the command without calling include. Also, files meant for inclusion may break if they were written with the assumption that they are only included once per project.
CMake also includes a large number of standard and non-standard modules, for which the variable scoping is difficult to list (and which obviously also changes between versions).
Examples of standard modules using cached variables (just to show that even standard cmake modules use cached vars):
include(FindCURL)  # defines cached vars: CURL_INCLUDE_DIR, CURL_LIBRARY
include(CPack)     # defines cached vars: CPACK_...
Apart from standard cmake modules, there are several "non-standard" cmake modules which may be used by several projects.
In general, the naming collisions make it harder to maintain ROS package forks of non-ROS projects, as the CMakeLists.txt may have to be changed to also be catkin compliant.
E.g.:
orocos_kinematics_dynamics/orocos_kdl/CMakeLists.txt
visualization_common/ogre_tools/CMakeLists.txt
driver_common/dynamic_reconfigure/cmake/cfgbuild.cmake
are examples of files where it is difficult to check whether they are catkin compliant or not.
Debugging
The problem of cmake conflicts is worsened by the fact that the really bad ones tend only to surface in catkin workspaces with many hundreds of packages, and to surface in a way that makes it difficult to diagnose which package's CMakeLists.txt or which included 3rd-party cmake file is responsible for a failed build.
Feature map
(+ means better, - means worse)
Feature | Vanilla CMake | rosbuild | Catkin
installation-from-source | + | - | +
atomic distribution unit | 0 | stacks | stacks
atomic build unit | CMake project | packages | stacks
machine-readable meta information | 0 | manifest.xml + stack.xml | stack.xml
exporting build flags (cc/ld) to other packages | + generated | - manual in manifest.xml | + generated
importing build flags (cc/ld) from other packages | - manual in CMakeLists.txt as target_link_libraries() | implicit (but bloated, not minimal), if exported, else broken | + semi-automatic, generates find_package() infrastructure based on catkin_project() arguments
single command multi-project build | parent project cmake | rosmake | workspace-level cmake
install target | + (if provided) | - | + (if provided)
FHS compliant install layout | + (if install provided) | - | + (if install provided)
build without custom tools | + | -- | -
run without custom tools | + | -- | +
cross compiling | + | - | +
isolated configuration | + | + | -
isolated build | + | + | -
fool-proof configure and build | + | + | -
multiple builds (e.g. Debug vs. Release) into separate folders | + | - | +
quick adding of custom source projects to environment | - make install | + copy, rosmake | + copy, make
quick removal of source projects from environment | -- (stow) | + delete folder | rm -rf build, make (ccache)
build space | in project | in-source | in-workspace
recursive make | -- | + rosmake ... | + make ...
quick make of other stack/package | -- | + rosmake stack | make -C path/to/workspace targetname
workspace folder layout | Arbitrary | Arbitrary | flat list of stacks
packaged sources | No | Always | possible to provide separate package with sources
Use cases
Installing ROS+stacks from source
A developer checks out the source of ROS core and several other stacks into a local folder. The user then runs a make-like command. As a result, the user is able to run ROS master and ROS nodes.
[JOQ] possible command sequence:
$ rosinstall ~/workspace http://rosinstall/yaml/file/url
$ cd ~/workspace
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install prefix=/usr/local
[JOQ] How does the user uninstall? (sudo make uninstall would be nice)
- [TF] We can look at adding an uninstall target. The simple naive approach is relatively simple. It's a question of how complex do you want to get it.
- [JOQ] Should we tell the user to install somewhere else, instead?
- [TF] I would generally suggest that the user not install into /usr unless they are an admin trying to deploy. And even then /opt/ros is more standard for ROS. I personally have never installed into /usr, it's just a mess to clean up if things go wrong.
- [JOQ] If so, what else is needed to set up the environment properly?
- [TF] To leverage installed packages set CMAKE_PREFIX_PATH to /usr/local in your case above. With this set, any cmake invocation will correctly find packages installed as above whether catkin or pure cmake.
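[TF]'s suggestion can be sketched as a one-line environment change; /usr/local is the prefix assumed in the exchange above:

```shell
# Point cmake at an install space so find_package() can locate
# packages installed there (catkin or pure cmake alike).
export CMAKE_PREFIX_PATH="/usr/local${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
```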
Adding a stack from source
A developer has a working ROS instance. The developer checks out the source of a stack, runs a build command. From then on, the newly-built stack is used (instead of another overlayed stack)
[JOQ] possible command sequence:
$ roslocate info stack-name | rosws merge -
$ rosws update stack-name
$ cd ~/workspace
$ rm -rf build
$ mkdir build
$ cd build
$ cmake ..
$ make
$ source ~/workspace/build/setup.bash
- [JOQ] How does catkin modify my environment so the commands work?
- [JOQ] Did the workspace setup.bash provide everything needed?
- [TF] catkin generates the setup.*sh to set all your required paths.
[JOQ] Does this work for "wet" stacks, like common_msgs?
- [TF] Yes, with my modification of moving the source of the setup to the bottom instead of the middle.
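To illustrate the kind of environment the generated setup.*sh establishes, here is a sketch. The variable names and paths are assumptions for a ~/workspace checkout; the real file is generated by catkin and may set more (or differently named) variables:

```shell
# Illustrative sketch only: prepend the workspace build space to the
# relevant search paths, as a generated setup.sh would.
WORKSPACE="$HOME/workspace"
export CMAKE_PREFIX_PATH="$WORKSPACE/build${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
export PATH="$WORKSPACE/build/bin:$PATH"
export PYTHONPATH="$WORKSPACE/build/lib${PYTHONPATH:+:$PYTHONPATH}"
```

Because the build space comes first on each path, artifacts built in the workspace shadow any installed versions, which is what makes the overlay work.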
Viewing source of installed package for debugging
Example: A developer wants to code a node that is somewhat similar to a node he can already run, and he wants to see that node's code rather than reinvent the wheel. Another example: the developer suspects a bug in roscpp and wants to read its code. In this example, it is also crucial that the developer sees the code that actually runs, not just the latest code in a repository.
[TF] Possible process flows:
Binary installation:
apt-get source ros-fuerte-mypackage
ls ros-fuerte-mypackage
Source installation:
roscd mypackage
ls
Removing a previously added-from-source stack
A developer has a working ROS instance. The developer runs one or more commands. As a result, the artifacts from the removed stack are no longer used in that environment.
[JOQ] possible command sequence:
$ cd ~/workspace
$ rosws remove stack-name
$ rm -rf stack-name build
$ source ./setup.bash
$ mkdir build
$ cd build
$ cmake ..
$ make
Cross compiling stacks
A developer creates binaries to run on several alien architectures. With catkin, the workspace needs to be configured only once for the cross-compilation builds. The build process then creates all artifacts within a single folder.
If catkin used isolated configuration processes per package, then the configuration step would have to be executed and monitored individually for many packages.
If catkin used isolated build folders per package, the build step would generate many such folders which would be harder to inspect and clean up manually.
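The one-time configuration step described above is typically done with a CMake toolchain file. The following is an illustrative sketch for an ARM target; the compiler names and sysroot path are assumptions, not something catkin prescribes:

```cmake
# arm.cmake - illustrative toolchain file, passed once at configure time:
#   cmake -DCMAKE_TOOLCHAIN_FILE=arm.cmake ..
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER   arm-linux-gnueabi-gcc)   # assumed cross compiler
set(CMAKE_CXX_COMPILER arm-linux-gnueabi-g++)
set(CMAKE_FIND_ROOT_PATH /opt/arm-sysroot)      # assumed target sysroot
# Search the sysroot for headers/libraries, but run host tools from the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

With a single workspace-wide configure, this file applies to every package in the build; with per-package configuration it would have to be passed and verified once per package.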
Daily work
A developer has a workspace with several packages that are built against a ROS installation. The developer regularly makes changes in several packages and rebuilds several of them.
catkin
# make changes
$ make
The duration of make is essential, as this is called very frequently in the daily work. Unnecessary waiting times lead to developers taking error-prone shortcuts.
Packaging stacks
A developer packages a stack or package for Debian/Ubuntu (apt-get) or Fedora.
[JOQ] what is the relevance of this use case to catkin? Isn't it a bloom packaging issue?
Example:
Dave the developer would like to make a Debian package for his ROS based software daves_inspection_system for release. He's hoping to make his release also palatable to users on limited storage targets.
Dave's workspace/repository contains several packages (Currently a stack) that are targeted at several different platforms, for instance, daves_inspection_system_OCU for the OCU, and the daves_inspection_system_robot for the robot itself.
daves_inspection_system_OCU has many more dependencies than daves_inspection_system_robot and so he would like to package them separately.
daves_inspection_system_robot depends on laser_geometry, but Dave is worried about pulling in PCL as a dependency because laser_filters shares a stack with it. As such he would like to depend on ROS packages rather than ROS stacks.
Dave also hopes that the dev and run-time dependencies are separate not only in ROS, but also in the official Deb packages so that he can keep his separate too and avoid pulling in the full gcc toolchain.
Proposed solution:
Separate the set of packages into multiple stacks.
[JOQ] Dave needs to understand dependencies better. Relatively few problems of this type can be solved by merely dividing packages into separate stacks. Consider the case of dynamic_reconfigure (already a unary stack), which has a <rosdep name="wxpython">, pulling in large GUI dependencies. That package needs to be reorganized, which is going to affect API compatibility. The only way to avoid problems like that is to think clearly about dependencies in the initial design. Packages and stacks have nothing to do with this problem. It can happen within a single source file, if the programmer is not careful.
Discovering stack / package dependencies using the dependency graph
A user wants to see how packages are related to each other, beyond merely knowing that one depends on another.
Wrap 3rd party libraries inside a package
For some developers that do not want to bother getting a 3rd party library released properly, they have been simply downloading and building libraries inside a ROS package. They would like to be able to continue doing so.
Suggestions
Reviewers can state suggestions here, discussion should happen in the ROS buildsystem SIG, with the results to be merged into this section by the reviewers.
AndrewSomerville
Additional catkin goal:
- Support packaging dev, run-time, and doc separately
- [DT] Providing run-time, dbg-symbols and doc separately should be possible. The difference between dev and run-time looks minimal (only removing headers?) when no extra information is available. What do you expect to be stripped from run-time compared to dev packages?
- [JOQ] Doesn't this mainly affect the packaging tools, not catkin?
- [TF] Yeah, I think we don't need to discuss this much further here and can talk about it in the [bloom] design.
[JOQ] Here is a small but typical example. The most interesting subtlety is that the run-time ABI symlink (libdc1394.so.22) is with the library, while the compile-time API symlink (libdc1394.so) is only available with the -dev package, along with the static library (libdc1394.a) and pkg-config file.
$ dpkg -L libdc1394-22
/.
/usr
/usr/lib
/usr/lib/libdc1394.so.22.1.5
/usr/share
/usr/share/doc
/usr/share/doc/libdc1394-22
/usr/share/doc/libdc1394-22/copyright
/usr/share/doc/libdc1394-22/changelog.Debian.gz
/usr/lib/libdc1394.so.22
$ dpkg -L libdc1394-22-dev
/.
/usr
/usr/lib
/usr/lib/pkgconfig
/usr/lib/pkgconfig/libdc1394-2.pc
/usr/lib/libdc1394.a
/usr/include
/usr/include/dc1394
/usr/include/dc1394/linux
/usr/include/dc1394/linux/capture.h
/usr/include/dc1394/linux/control.h
/usr/include/dc1394/vendor
/usr/include/dc1394/vendor/avt.h
/usr/include/dc1394/vendor/pixelink.h
/usr/include/dc1394/vendor/basler.h
/usr/include/dc1394/vendor/basler_sff.h
/usr/include/dc1394/dc1394.h
/usr/include/dc1394/types.h
/usr/include/dc1394/camera.h
/usr/include/dc1394/control.h
/usr/include/dc1394/capture.h
/usr/include/dc1394/video.h
/usr/include/dc1394/format7.h
/usr/include/dc1394/utils.h
/usr/include/dc1394/conversions.h
/usr/include/dc1394/register.h
/usr/include/dc1394/log.h
/usr/include/dc1394/iso.h
/usr/share
/usr/share/doc
/usr/share/doc/libdc1394-22-dev
/usr/share/doc/libdc1394-22-dev/README
/usr/share/doc/libdc1394-22-dev/AUTHORS
/usr/share/doc/libdc1394-22-dev/README.Debian
/usr/share/doc/libdc1394-22-dev/copyright
/usr/share/doc/libdc1394-22-dev/NEWS.gz
/usr/lib/libdc1394.so
/usr/share/doc/libdc1394-22-dev/changelog.Debian.gz
Thibault Kruse
- A wrapper to cmake so that we control user experience, e.g. providing rosmake --help, and mapping rosmake --clean or --purge to "rm -rf build".
Possibly have a cmake target which generates "package" manifest dependency information from cmake variables, if manifest.xml files no longer contain dependency information (or get deleted). See http://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/audio/festival/README.html The page is auto-generated; the section "This package requires the following package(s) to build" is made from Makefile variables.
This does not imply parsing the cmake files, but instead telling cmake to dump variables into a file. The information would only be available after cmake invocation, but that would still be better than no access to the information at all. Also see http://www.cmake.org/Wiki/CMake_FAQ#How_can_I_generate_a_source_file_during_the_build.3F
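The variable-dumping idea above could be sketched as follows. The package name and the dependency variable are illustrative assumptions; the point is only that CMake writes out what it already knows at configure time:

```cmake
# Hypothetical sketch: at configure time, dump the dependency list that the
# build already knows into a plain file, so external tools can read it
# without parsing CMake code. "mypackage_DEPENDS" is an assumed variable name.
set(mypackage_DEPENDS roscpp std_msgs)
file(WRITE "${CMAKE_BINARY_DIR}/mypackage.deps" "${mypackage_DEPENDS}\n")
```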
- Allowing hierarchic workspace layouts would help developers set up their environment the way their brain works. This will have a certain negative performance impact for building, but speed up the time users need to find their files. The goal is this:
- If there is a large team working on a huge project, and some team members work on stack1-stack20, and another team works on stack21-stack40, and both teams need a workspace with stack1-stack40, then it help for introducing new team members if the workspace layout puts stacks under e.g. a project name folder.
- [DT] I would suggest to use two workspace in that case, one with the stuff the people need but not work on, and one with the other half of stacks.
- Also some people talk about converting rosbuild stacks into n unary stacks for each of their packages. The overview of the workspace will then be much worse as the number of subfolders will increase.
- Provide source packages to install (see apt-src)
- Invoke cmake/make on build groups individually, in topological order.
- Generally, the whole concept of invoking cmake/make just once per workspace seems like broken design to me, done for the wrong reasons. Supporting people who do releases and cross-compilation is a noble goal, but the tool to do that should not be the tool that novice developers have to use. That's a bit like selling a family car with the cockpit of an Airbus.
Instead, I would suggest that cross-compiling and releasing rest on a separate tool like a jhbuild fork (https://live.gnome.org/Jhbuild/), and that for developers we stick with a rosbuild-like approach of keeping cmake namespaces and build spaces isolated. I do not mean to stick with rosbuild, I just mean to stick with calling cmake/make per build unit. This could well be cmake/catkin per stack, with the additional effort of the wrapper to provide a customized environment per build unit. As the use case for developers does not involve hundreds of stacks, the size of env variables is negligible. The make cycle duration is on the one hand just the time it takes to make things cleanly. On the other hand, it is possible to introduce the concept of a build group larger than a stack, like "ros_base_variant", on which cmake/make can be called as a whole. This could be a single workspace entry.
- [DT] This suggestion has the following side effects:
- separate build directories for each stack; this means in order to achieve rb_G1 the environment must include several folders from each build space, which violates ctk_G3
- since stacks are built sequentially, the goal ctk_G13 is not achieved
- If each workspace only contains a single stack we achieve exactly what you describe, multiple separate CMake builds (with all the cons), so the user can freely choose if he wants to put multiple stacks in one workspace
- The assumption that the number of stacks built at the same time stays small is wrong; for example, on the build farm a pre-release requires that all stacks depending on a to-be-released stack are built from source, and this can easily grow to much more than hundreds
- [TK] yes, I am saying catkin goals ctk_G3 and ctk_G13 should be dropped. Those are not relevant enough for ROS developers. I believe developers will not value these goals more than the clean separation of builds (both in namespace and build space). If each workspace contains just one stack, the make performance and the effort to make N stacks are much worse than with rosbuild. Also it is up to the user to organize the workspaces and call make in the right order, which is a nightmare. And as I said, on a build farm there is no reason to use catkin if a clean solution like jhbuild exists that calls make install for all stacks; that does not require long environment variables, as everything builds against the install space.
Lorenz Mösenlechner
- Clarify how 3rd party libraries should be integrated. 3rd party libraries that are wrapped in a ros package and installed locally in the package dir are a pretty common and very useful use case of rosbuild. Many libraries used in research don't have a debian package and researchers might not be able or not want to create such a debian package to be distributed over WG's package repository, e.g. because of copyright issues, etc. Being able to just call rosmake to get everything, including 3rd party libraries, built and installed without messing up the system or requiring root was one important feature of rosbuild. To get rid of rosbuild, I believe a solution has to be found to not require the user to manually build and install such 3rd party dependencies as it was in pre-ros times.
- [TF] You can do this inside a cmake package just as you have inside rosbuild. We probably should look at what helper functions might make this easier, as we had inside rosbuild. I've added it as a use case above.
- [LM] I see that it is easy to have the download, build and even the install wrapped in the CMakeLists. But how would accessing the library at build time work, i.e. without an explicit make install?
- [TF] You can either write your own find_package cmake modules, or use the catkin macros to autogenerate them for you like the native packages.
[LM] So how exactly would that work? I create a cmake file that downloads, extracts and builds a library, right? What would be the install prefix for that library? If I used CMAKE_INSTALL_PREFIX and executed make install when the user installs, the library would be usable only after that make install, i.e. it wouldn't be available in the workspace. If I first installed it locally, I would need to provide an additional install target. If the library then substitutes the install prefix in some files, e.g. if it generates a pkg-config file, the path would still point to the build dir, not the actual install prefix.
- [TF] If the library only supports usage after installation, we cannot support both build space and install space operation unless it recompiles inside the install target. (The same is true of the legacy system, just the legacy system never tried to install so we just installed into the build space.) You can install into the build space as in the old system, and your package won't be installable. If it's a cmake based library, it likely could be integrated into the overall build by including it as a subdirectory after download. But in general, we can't do more than the library's build system supports. So if it doesn't support build space usage, then we can't.
[JB] At least for user-space only builds, CMake has a bunch of built-in tools for pulling in 3rd party dependencies, what they call "external projects." The ExternalProject_Add() function has built-in arguments for downloading from various VCS types, applying patches, and configuring right in the package location. Additionally, this function could potentially be used in conjunction with Catkin to easily redistribute packages that are not in a platform's normal distribution channels.
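A minimal sketch of the ExternalProject_Add() approach JB describes, for an autotools-style 3rd party library. The library name and URL are placeholders; CONFIGURE/BUILD/INSTALL commands would vary with the library's own build system:

```cmake
# Sketch: wrap a hypothetical 3rd party library "somelib" as an external
# project that is downloaded, built and staged at build time.
include(ExternalProject)
ExternalProject_Add(somelib
  URL http://example.com/somelib-1.0.tar.gz   # placeholder download URL
  PREFIX ${CMAKE_BINARY_DIR}/somelib          # all work happens under here
  CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR>
  BUILD_COMMAND make
  INSTALL_COMMAND make install)
```

Targets in the wrapping package can then declare add_dependencies(mytarget somelib) and point their include/link paths at the external project's install directory.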
[DS] The simple solution at the moment would be to catkin install a 3rd party stack to the installspace, then download and catkin build your code stack in a different workspace. Both the buildspace and the installspace of that workspace can then utilise the installed 3rd party packages. e.g.
# Install a base set of ros stacks
> sudo apt-get install ros-fuerte-desktop
> mkdir ws; cd ws
# Configure two separate workspaces, one for 3rd party, one for your code
> rosws init --catkin 3rdparty
> rosws init --catkin src
> cd 3rdparty; rosws merge my_3rd_party_stacks.rosinstall; rosws update; cd ..
> cd src; rosws merge my_src_stacks.rosinstall; rosws update; cd ..
# Install the 3rd party workspace, then develop in the other
> mkdir build_3rdparty; cd build_3rdparty; cmake ../3rdparty; make; make install
> cd ..
> mkdir build; cd build; cmake ../src; make   # (optionally make install)
An alternative way would be to put your 3rdparty and src stacks in the same rosinstall and workspace and make use of CATKIN_BLACKLIST_STACKS and CATKIN_WHITELIST_STACKS in two parallel builds.
To create your 3rdparty packages, you would have to use cmake. Most 3rdparty packages with rosbuild traditionally used make, which wasn't a good solution because you couldn't pass in cmake flags to the build that you used for the rest of your packages. You can do this in cmake though, which is what you'd have to do for catkin 3rdparty packages. I used to do this for an embedded opencv rosbuild - see the CMakeLists.txt in eros_opencv. It would be simple to build some cmake macros to facilitate that, or as JB mentioned, utilise the cmake external project infrastructure.
Downside is that the above is a 2-step process (currently with rosbuild it's a 1-step process).
[DS] The complicated solution I think would be to add some framework to rosdep so that it could identify a source rosdep and have a structured way of downloading a 'source rosdep package', building, installing and uninstalling it.