Architecture Model
The layered architecture pattern establishes a topological and hierarchical organization of the software components. Each layer gathers software components with similar features and abstraction level and exposes a well-defined communication interface, which allows specific components or the entire layer to be replaced without changing the system behavior. This property reinforces the modularity, scalability, reusability, and maintainability of the architectural style. ROS provides the communication interface between components through its Publish/Subscribe message-passing pattern.
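As a hedged sketch of that interface (the node names, topic name, and message type below are illustrative assumptions, not the project's actual contract), a perception component could publish its results on a topic that a planning component subscribes to:

```python
#!/usr/bin/env python3
# Minimal ROS 1 publish/subscribe sketch. Node names, the topic
# /perception/detections, and the String message type are assumptions
# made for illustration only.
import rospy
from std_msgs.msg import String


def run_perception_publisher():
    # A node in the perception layer publishes its results on a topic.
    rospy.init_node('perception_node')
    pub = rospy.Publisher('/perception/detections', String, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='obstacle_ahead'))
        rate.sleep()


def run_planning_subscriber():
    # A node in the planning layer consumes the topic without knowing
    # which concrete component (or replacement) produced the data.
    rospy.init_node('planning_node')
    rospy.Subscriber('/perception/detections', String,
                     lambda msg: rospy.loginfo('received: %s', msg.data))
    rospy.spin()
```

Each function would run as its own ROS node in practice; the planning layer depends only on the topic, so the publishing component can be swapped out without touching the subscriber.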
This section lists the core components for the autonomous vehicle:
- Sensing
  - RGB camera
  - Depth camera
  - Segmentation camera
  - LIDAR
  - GPS
  - Map
  - Inertial Measurement Unit (IMU)
- Localization
  - Position
  - Orientation
- Perception
  - Traffic Sign Detection
  - Traffic Light Detection
  - Static Obstacle Detection
  - Dynamic Obstacle Detection
  - Collision Detection
  - Lane Detection
- Path Planning
  - Global Planner
  - Local Planner
  - Decision Making
- Control
  - Lateral Control (Steering)
  - Longitudinal Control
- System Management
  - Human Machine Interface (HMI)
  - Driving Mode Management
  - Fault Management
  - Power Management
In the following, different architecture models are shown. The models include the relevant components elaborated in previous issues and in the literature.
Drafts (we use a different architecture)
- System-management layer monitors all components of the system, detects and identifies faults or abnormal behaviors, and launches recovery protocols in case of faults and unexpected conditions
- Human-vehicle interface layer provides the graphical tools to access the system, visualizing component information and feedback, and to request specific missions, such as a destination to be reached by the vehicle.
- Sensing layer makes data from sensors available to other components of the system
- Perception layer collects information and extracts relevant knowledge from the environment
- Localization component estimates the position and orientation of the vehicle in a specific coordinate system
- Path-planning layer finds a path from the current position of the vehicle towards its destination, and the decision-making layer decides on the behaviour of the vehicle according to both driving situations and traffic laws
- Global/route planning computes the overall route from the current position to the destination
- Behavior generation (a.k.a. decision making) makes tactical decisions on the actions of the vehicle, e.g., maneuvers to be performed. Some of the algorithms are listed below in State decision
- Motion planning/local planning creates local, obstacle-free, and dynamically feasible paths to be tracked by low-level controllers in the control layer
- Control layer executes the planned actions generated by the higher-level processes, e.g., by generating brake, throttle, and steering-angle commands. The movement is split between lateral and longitudinal controllers, which calculate actions considering the kinematic and dynamic constraints of the vehicle (a minimal sketch follows after this list).
- Drafts 1 and 2: absence/presence of the sensor-fusion layer. The sensor-fusion layer in draft 2 collects the data from the different sensors and splits the data flow between the localization layer and the perception layer. The fusion layer also preprocesses the data to generate a 360° view. In draft 2 there is no connection from the sensing layer to the planning layer.
- Drafts 2 and 3: In draft 3 the system-management layer has only one component, which controls the perception (enforces the traffic regulations). In general, the perception has two parts: environmental perception and self-localization. Another difference is the structure of the planning layer and how it receives the data from the previous layers.
- Drafts 3 and 4: absence/presence of the human-vehicle interface layer, which simply provides the visualisation.
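To make the lateral/longitudinal split in the control layer concrete, here is a minimal, hedged sketch; the ControlCommand structure, the proportional gains, and the function names are illustrative assumptions, not the project's actual controllers:

```python
# Sketch of the lateral/longitudinal controller split in the control layer.
# The ControlCommand fields, gains, and function names are illustrative
# assumptions, not the project's actual interfaces.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class ControlCommand:
    steering: float  # [-1.0, 1.0], produced by the lateral controller
    throttle: float  # [0.0, 1.0], produced by the longitudinal controller
    brake: float     # [0.0, 1.0], produced by the longitudinal controller


def lateral_control(cross_track_error_m: float, gain: float = 0.5) -> float:
    # Minimal proportional steering law, clipped to the actuator range.
    return max(-1.0, min(1.0, -gain * cross_track_error_m))


def longitudinal_control(current_speed: float, target_speed: float,
                         gain: float = 0.3) -> Tuple[float, float]:
    # Positive speed error -> throttle, negative speed error -> brake.
    error = target_speed - current_speed
    if error >= 0.0:
        return min(1.0, gain * error), 0.0
    return 0.0, min(1.0, -gain * error)


def compute_command(cross_track_error_m: float, current_speed: float,
                    target_speed: float) -> ControlCommand:
    throttle, brake = longitudinal_control(current_speed, target_speed)
    return ControlCommand(steering=lateral_control(cross_track_error_m),
                          throttle=throttle, brake=brake)
```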
The logical system overview provides insight into the intercommunication between our self-driving components. It outlines the general data flow from sensor information to actionable driving signals.
In contrast, the technical system overview focuses on the environment required to run the project's self-driving simulation. We're heavily relying on Docker to abstract the ROS complexity away. This brings several benefits such as enhanced team collaboration through simple dev machine setups, GitHub CI/CD pipelines and, most importantly, more reliable, well-defined components.
The original architecture was a pure Docker approach, theoretically capable of online reinforcement training on large GPU clusters (we didn't have the resources to test this). However, running the CARLA simulator inside a Docker container caused considerable performance issues, so we decided to run the simulator directly on the host machine and route all network traffic to localhost via Docker's 'host' network. In case a newer version fixes the performance issues, it might be beneficial to switch back to the pure Docker approach, as it allows for more flexibility.
Issues with ROS:
- the ROS build process is really complicated; it's almost impossible to get everything right when typing all the ROS catkin commands into the console manually
- ROS deployment is a real mess; there are binaries all over the place that need to be registered in the shell environment, etc.
- it's hard to decouple the components from each other -> modularity without modules being independently deployable is useless!
Upsides of ROS:
- everything about ROS is properly Linux-scriptable
- ROS nodes can be connected across host machine boundaries via TCP -> loose coupling
Solution with Docker:
- deploy each ROS node as a "service" running inside a Docker container (each container has a single purpose)
- glue the containers together by launching a container serving as ROS master (execute roscore as entrypoint)
- automate all this container launching using one or more docker-compose files
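A minimal docker-compose sketch along these lines might look as follows; the service names, ROS distribution, and registry path are assumptions. One container runs roscore as the ROS master, each node container points at it via ROS_MASTER_URI, and the 'host' network lets the containers reach both the master and a CARLA simulator running directly on the host via localhost:

```yaml
# docker-compose.yml (sketch; image names, tags, and registry path are assumptions)
version: "3"
services:
  ros-master:
    image: ros:noetic-ros-core
    command: roscore                                # this container only serves as the ROS master
    network_mode: host

  perception:
    image: ghcr.io/<org>/<repo>/perception:latest   # pre-built image, pulled instead of built locally
    network_mode: host                              # reach roscore and the CARLA host instance via localhost
    environment:
      - ROS_MASTER_URI=http://localhost:11311
    depends_on:
      - ros-master
```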
Issues with Docker:
- having to build images locally is really annoying and takes lots of time before finally launching the image
- various versions of the source code for each component need to be managed to work together
- this kind of version management can get really messy quickly
Solution with GitHub Workflows:
- each commit into GitHub triggers a build pipeline to run the CI/CD with GitHub workflows
- purpose of the CI/CD pipeline: determine whether the software is releasable
- pipeline steps:
- take the source code and build it into Docker images
- run all tests against the Docker images to make sure everything is working properly
- this can include unit tests, integration tests, performance tests, etc. (-> ensure that the system is "good enough")
- in case all checks pass, the Docker image can be released by pushing it to a registry
- in case of GitHub workflows, the Docker images can be published into GitHub's Docker registry
- that means, nobody needs to actually build the Docker images locally, they can be pulled from GitHub instead
- if a single test fails, none of the images get published until the system fulfills the requirements
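A trimmed-down workflow along these lines could look as follows; the component path, image name, and test command are assumptions. On every push the image is built from source, the tests run inside it, and the image is only pushed to GitHub's container registry if everything passes:

```yaml
# .github/workflows/ci.yml (sketch; component path, image name, and test command are assumptions)
name: CI/CD
on: push

jobs:
  build-test-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Build the component's Docker image from the checked-out sources
      - run: docker build -t ghcr.io/${{ github.repository }}/perception:latest components/perception

      # Run the test suite inside the freshly built image; a failing test fails the job
      - run: docker run --rm ghcr.io/${{ github.repository }}/perception:latest pytest

      # Publish only if all previous steps succeeded
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - run: docker push ghcr.io/${{ github.repository }}/perception:latest
```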
Issues with Multi-Component Scenario Setups:
- launching a scenario is really fragile, depends largely on the host system
- non-automated launch procedures can fail for reasons like not having configured the components correctly, etc.
Define Scenarios as Docker-Compose:
- all components can be defined in an infrastructure-as-code style
- it forces developers to specify everything the system needs very explicitly, so it can run on any PC
- the Docker images are built and evaluated server-side by GitHub, so just pull the latest version of each
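A scenario's docker-compose.yml could then simply pull the published images and mount the scenario-specific configuration; the file and image names below are assumptions that mirror the scenario structure shown further down:

```yaml
# scenarios/scenario-1/docker-compose.yml (sketch; names are assumptions)
services:
  planning:
    image: ghcr.io/<org>/<repo>/planning:latest   # pulled from the registry, never built locally
    network_mode: host
    volumes:
      - ./config:/config:ro                       # scenario-specific settings and launch files
    command: roslaunch /config/some_launchfile.launch
```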
Top-Level:
- scenarios
  - scenario 1
  - scenario 2
  - ...
- components
  - integration-tests
  - node 1
  - node 2
  - ...
- README.md
Scenario:
- config
  - some_settings.yml
  - some_launchfile.launch
  - ...
- docker-compose.yml
- README.md
Dockerized ROS Node:
- node
  - common ROS node file structure ...
- Dockerfile
- requirements.txt
- ros_entrypoint.sh
- README.md
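A hedged Dockerfile sketch for such a node; the ROS distribution (noetic), the workspace path, and the `<node_name>` placeholder are assumptions:

```dockerfile
# Dockerfile (sketch; ROS distribution, paths, and the <node_name> placeholder are assumptions)
FROM ros:noetic-ros-base

# Install the node's Python dependencies
RUN apt-get update && apt-get install -y python3-pip && rm -rf /var/lib/apt/lists/*
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt

# Copy the ROS package into a fresh catkin workspace and build it,
# so that custom *.srv / *.msg definitions are generated
ENV CATKIN_WS=/catkin_ws
COPY node/ $CATKIN_WS/src/<node_name>/
RUN . /opt/ros/noetic/setup.sh && cd $CATKIN_WS && catkin_make

# The entrypoint sources the ROS and workspace environments before running the command
COPY ros_entrypoint.sh /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["roslaunch", "<node_name>", "<node_name>.launch"]
```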
ROS Node Structure:
- launch
- <node_name>.launch
- srv
- some CMake project for compiling a custom *.srv message ...
- src
- <node_name>
- main.py
- __init__.py
- more python code ...
- tests
- <test_node_name>
- __init__.py (empty)
- conftest.py (empty)
- pytest.ini (empty)
- test_my_module.py
- more pytest files
- __init__.py
- <test_node_name>
- <node_name>
- CMakeLists.txt
- package.xml
- setup.cfg
- setup.py
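Since the tests are plain pytest files, node-internal logic can be exercised without a running ROS master. A minimal sketch, where the package name local_planner and the function clamp_speed are hypothetical placeholders for code under src/<node_name>/:

```python
# tests/<test_node_name>/test_my_module.py (sketch; local_planner and clamp_speed are hypothetical)
from local_planner.my_module import clamp_speed


def test_speed_is_capped_at_the_speed_limit():
    # the planner must never command a speed above the current limit
    assert clamp_speed(target=50.0, speed_limit=30.0) == 30.0


def test_speed_below_the_limit_is_unchanged():
    assert clamp_speed(target=20.0, speed_limit=30.0) == 20.0
```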
Collaboration:
- Simple System Setup
- access to GPU via NVIDIA Docker
- fully-automated build / launch procedure
- GitHub CI/CD pipelines
Portability / Scalability:
- Local Development
- Remote Performance Testing
- Live-Training on GPU Cluster (A3C RL)
References:
- Development of Autonomous Car—Part I
- Development of Autonomous Car—Part II
- Winners of First CARLA Autonomous Driving Challenge
- Motion Planning for Automated Vehicles in Mixed Traffic
- Perception, Planning, Control, and Coordination for Autonomous Vehicles