This repository contains scripts and GitHub Actions (GHA) YAML workflows for building, testing and deploying OCI images (a.k.a. Docker/Podman images) that include open source electronic design automation (EDA) tooling. All of them are pushed to hub.docker.com/u/hdlc.
Tools and images
[Table: one row per tool/project, with columns Tool, Image (Package, Ready-to-use) and Included in (I, F, P, Others); it is autogenerated from doc/tools.yaml.]
Images including multiple tools: see All-in-one images (e.g. hdlc/impl, hdlc/formal and hdlc/prog).
To Do:
Context
This project started in GitHub repository ghdl/ghdl (which was named tgingold/ghdl back then). The main purpose was testing GHDL on multiple GNU/Linux distributions (Debian, Ubuntu and Fedora), since Travis CI supported Ubuntu only, but did support Docker. For each target platform, two images were used: one for building and another one for testing.
Later, most of the Docker related sources were split off into repository ghdl/docker. There, some additional simulation tools were added, such as VUnit and GtkWave. Images including the ghdl-language-server were also added. When synthesis features were added to GHDL, and since it provides a plugin for Yosys, tools for a complete open source workflow were requested: nextpnr, icestorm, prjtrellis, SymbiYosys, etc.
At some point, ghdl/docker contained as much content related to non-GHDL tools as resources related to the organisation itself. At the same time, SymbiFlow aimed at gathering open source projects to provide an integrated open source EDA solution. However, it did not have official container images and help was wanted. This repository was initially created for moving all the tools which were not part of GHDL from ghdl/docker to symbiflow/containers. However, apart from adding known Verilog tools, the scope was widened. Hence, the repository was published as hdl/containers.
Usage
Official guidelines and recommendations for using containers suggest keeping containers small and specific for each tool/purpose. That fits well with the field of web microservices, which communicate through TCP/IP and which need to be composed, scaled and balanced all around the globe.
However, tooling in other fields is expected to communicate through a shared or local filesystem and/or pipes; therefore, many users treat containers as lightweight virtual machines. That is, they put all the tools in a single (heavy) container. Those containers are typically not moved around as frequently as microservices, but cached on developers' workstations.
In this project, both paradigms are supported; fine-grained images are available, as well as all-in-one images.
Fine-grained pulling
Ready-to-use images are provided for each tool, which contain the tool and the dependencies for it to run successfully. These are typically named hdlc/<TOOL_NAME>. Since all of them are based on the same root image, pulling multiple images involves retrieving only a few additional layers. Therefore, this is the recommended approach for CI or other environments with limited resources.
- There is an example at ghdl-yosys-blink: Makefile, showcasing how to use this fine-grained approach with a makefile.
- At marph91/icestick-remote, the CI workflow for synthesis uses this approach.
- Moreover, PyFPGA is a Python class for vendor-independent FPGA development, which runs GHDL, Yosys, etc. in containers.
These images are coloured GREEN in the [Graph].
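As a minimal sketch of this fine-grained approach (the image tags, file names and tool options below are illustrative assumptions, not taken from the repository), each step can run in its own container while sharing the working directory:

# Pull the tool-specific images; since they share the same root image,
# the second pull only retrieves a few additional layers.
docker pull hdlc/ghdl:yosys
docker pull hdlc/nextpnr:ice40
# Synthesise a VHDL design with GHDL + Yosys (illustrative file and unit names).
docker run --rm -v "$PWD":/src -w /src hdlc/ghdl:yosys \
  yosys -m ghdl -p 'ghdl top.vhd -e top; synth_ice40 -json top.json'
# Place and route the result with nextpnr in a separate container.
docker run --rm -v "$PWD":/src -w /src hdlc/nextpnr:ice40 \
  nextpnr-ice40 --json top.json --asc top.asc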
All-in-one images
Multiple tools from fine-grained images are included in larger images for common use cases. These are named hdlc/<MAIN_USAGE>. This is the recommended approach for users who are less familiar with containers and want a quick replacement for full-featured virtual machines. Accordingly, some common Unix tools (such as make or cmake) are also included in these all-in-one images.
- The CI workflow in tmeissner/formal_hw_verification uses image hdlc/formal:all along with GitHub’s 'Docker Action' syntax (see docs.github.com: Learn GitHub Actions > Referencing a container on Docker Hub).
These images are coloured BROWN in the [Graph].
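For reference, a hedged sketch of such a step using the docker:// syntax (the checkout action, the .sby file name and the sby options are assumptions for illustration, not taken from that workflow):

# Excerpt of a GitHub Actions job running SymbiYosys from the all-in-one formal image.
steps:
  - uses: actions/checkout@v4
  - uses: docker://hdlc/formal:all
    with:
      args: sby -f my_design.sby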
Tools with GUI
By default, tools with Graphical User Interface (GUI) cannot be used in containers, because there is no graphical server. However, there are multiple alternatives for making an X11 or Wayland server visible to the container. mviereck/x11docker and mviereck/runx are full-featured helper scripts for setting up the environment and running GUI applications and desktop environments in OCI containers. GNU/Linux and Windows hosts are supported, and security related options are provided (such as cookie authentication). Users of GTKWave, nextpnr and other EDA tools will likely want to try x11docker (and runx).
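For instance, a hedged one-liner with x11docker might look as follows (the image name is a placeholder; check x11docker's documentation for the options matching your host and security needs):

# Share the host X display with a container and launch a GUI tool (names are illustrative).
x11docker --hostdisplay hdlc/IMAGE gtkwave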
USB/IP protocol support for Docker Desktop
The USB/IP protocol allows passing USB device(s) from server(s) to client(s) over the network. As explained at kernel.org/doc/readme/tools-usb-usbip-README, on GNU/Linux, USB/IP is implemented as a few kernel modules with companion userspace tools. However, the default underlying Hyper-V VM (based on Alpine Linux) shipped with Docker Desktop (aka docker-for-win/docker-for-mac) does not include the required kernel modules. Fortunately, privileged docker containers allow installing missing kernel modules. The shell script in usbip/ supports customising the native VM in Docker Desktop to add USB over IP support.
# Build kernel modules: in an unprivileged `alpine` container, retrieve the corresponding
# kernel sources, copy runtime config and enable USB/IP features, build `drivers/usb/usbip`
# and save `*.ko` artifacts to relative subdir `dist` on the host.
./run.sh -m
# Load/insert kernel modules: use a privileged `busybox` container to load kernel modules
# `usbip-core.ko` and `vhci-hcd.ko` from relative subdir `dist` on the host to the
# underlying Hyper-V VM.
./run.sh -l
# Build image `vhcli`, using `busybox` as a base, and including the
# [VirtualHere](https://www.virtualhere.com) GNU/Linux client for x86_64 along with the
# `*.ko` files built previously through `./run.sh -m`.
./run.sh -v
For manually selecting configuration options, building and inserting the modules, see the detailed procedure in gw0/docker-alpine-kernel-modules#usage.
Modules will be removed when the Hyper-V VM is restarted (i.e. when the host or Docker Desktop are restarted). For a permanent install, modules need to be copied to /lib/modules in the underlying VM, and /etc/modules needs to be configured accordingly. Use $(command -v winpty) docker run --rm -it --privileged --pid=host alpine nsenter -t 1 -m -u -n -i sh to access a shell with full permissions on the VM.
Example session
How to connect a Docker Desktop container to VirtualHere USB Server for Windows.
- Start vhusbdwin64.exe on the host.
- Ensure that the firewall is not blocking it.
# Start container named 'vhclient'
./run.sh -s
# List usb devices available in the container
./run.sh -e lsusb
# LIST hubs/devices found by vhclient
./run.sh -c "LIST"
# Manually add to the client the hub/server running on the host
./run.sh -c "MANUAL HUB ADD,host.docker.internal:7575"
sleep 10
./run.sh -c "LIST"
# Use a remote device in the container
./run.sh -c "USE,<SERVER HOSTNAME>.1"
sleep 4
# Check that the device is now available in the container
./run.sh -e lsusb
There is an issue/bug in Docker Desktop (docker/for-win#4548) that prevents the container where the USB device is added from seeing it. The workaround is to execute the board programming tool in a sibling container. For example: docker run --rm --privileged hdlc/prog iceprog -t .
Alternatives
Using VirtualHere is the only solution we could successfully use in order to share FTDI devices (icestick boards) between a Windows 10 host and a Docker Desktop container running on the same host. However, since the USB/IP protocol is open source, we’d like to try any other (preferably free and open source) server for Windows along with the default GNU/Linux usbip-tools. Should you know about any, please let us know! We are aware of cezuni/usbip-win; however, it seems to be at a very early development stage and the install procedure is still quite complex.
Serial (COM) devices can be shared with open source tools. On the one hand, hub4com from project com0com allows publishing a port through an RFC2217 server. On the other hand, socat can be used to link the network connection to a virtual tty device.
HOST CONTAINER
--------------------------- -------------------------------------
USB <-> | COMX <-> RFC2217 server | <-> network <-> | socat <-> /dev/ttySY <-> app/tool |
--------------------------- -------------------------------------
REM On the Windows host
com2tcp-rfc2217.bat COM<X> <PORT>
# In the container
socat pty,link=/dev/ttyS<Y> tcp:host.docker.internal:<PORT>
It might be possible to replace hub4com with pyserial/pyserial. However, we did not test it.
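pyserial ships an example RFC2217 server script (examples/rfc2217_server.py in its sources). A hedged sketch of such a replacement, assuming the script is run on the Windows host (check its --help for the exact options and the TCP port it listens on):

REM On the Windows host
python rfc2217_server.py COM<X>

# In the container (adjust <PORT> to the port the server listens on)
socat pty,link=/dev/ttyS<Y> tcp:host.docker.internal:<PORT>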
Contributing
This repository provides a set of base images for building and for runtime, defined in base.dockerfile. All the images in the ecosystem are based on these.
Then, for each project/tool there are a dockerfile, a GitHub Actions workflow, and one or more test scripts. Those are used for:
- Tools are built using hdlc/build images.
- Package images based on scratch (and/or other reusable packages) are produced.
- Ready-to-use images based on the runtime base image (hdlc/build:base) are produced.
- Ready-to-use images are tested before uploading.
In some dockerfiles/workflows, package images are created too. Those are based on scratch and contain pre-built assets. Therefore, they are not really useful per se, but are meant to be used for building other images. In fact, multiple tools are merged into ready-to-use images for common use cases (such as hdlc/impl, hdlc/formal or hdlc/prog).
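As an illustration of this layout, a hedged skeleton of such a dockerfile is shown below; stage names, paths, packages and the tool itself are placeholders (not an actual recipe from this repository), and a Debian-based hdlc/build image is assumed:

# Build stage: add build dependencies and install into an empty DESTDIR.
FROM hdlc/build:build AS build
RUN apt-get update -qq && apt-get -y install --no-install-recommends libfoo-dev
COPY . /src
RUN cd /src \
 && ./configure --prefix=/usr/local \
 && make \
 && make DESTDIR=/opt/tool install

# Package stage: pre-built artifacts only, not runnable.
FROM scratch AS pkg
COPY --from=build /opt/tool /tool

# Ready-to-use stage: runtime base plus runtime dependencies and the artifacts.
FROM hdlc/build:base
RUN apt-get update -qq && apt-get -y install --no-install-recommends libfoo1
COPY --from=build /opt/tool /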
Before working on adding or extending the support for a tool, please check the issues and pull requests; open an issue or let us know through the chat. Due to the distributed nature of this project, someone might be working on that already!
Currently, many projects don’t use containers at all; hence, all images are generated in this repository. However, the workload is expected to be distributed among multiple projects in the ecosystem.
Graphs
Understanding how all the pieces in this project fit together might be daunting for newcomers. Fortunately, there is a map to help maintainers and contributors navigate the ecosystem. Subdir graph/ contains the sources of directed graphs, which show the relations between workflows, dockerfiles, images and tests.
Graphviz's digraph format is used, hence graphs can be rendered to multiple image formats. The SVG output shown in Figure 1 describes which images are created in each subgraph. See the details in the figure corresponding to the name of the subgraph: Base (Figure 2), Sim (Figure 3), Synth (Figure 4), Impl (Figure 5), Formal (Figure 6). Multiple colours and arrow types are used for describing different dependency types. All of those are explained in the legend: Figure 7.
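As a minimal sketch of what these sources look like (node names and edges below are illustrative, not taken from graph/):

// Illustrative digraph; render with: dot -Tsvg graph.dot -o graph.svg
digraph example {
  "base.dockerfile" -> "hdlc/build:base";
  "hdlc/build:base" -> "hdlc/ghdl";
  "hdlc/pkg:yosys"  -> "hdlc/impl";
}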
Package images
Each tool/project is built only once in this image/container ecosystem. However, some (many) of the tools need to be included in multiple images for different purposes. Moreover, it is desirable to keep build recipes separated, in order to better understand the dependencies of each tool/project. Therefore, hdlc/pkg:* images are created/used (coloured BLUE in the [Graph]). These are all based on scratch and are not runnable. Instead, they contain pre-built artifacts, to be then added into other images through COPY --from=.
Since hdlc/pkg:* images are not runnable per se, but an intermediate utility, the usage of environment variables PREFIX and DESTDIR in the dockerfiles might be misleading. All the tools in the ecosystem are expected to be installed into /usr/local, the standard location for user-built tools in most GNU/Linux distributions. Hence:
- PREFIX should typically not need to be modified. Most of the tools will default to PREFIX=/usr/local, which is correct. Yet, some tools might default to / or /usr. In those cases, setting it explicitly is required.
- DESTDIR must be set to an empty location when calling make install, or when copying the artifacts by other means. The content of the corresponding package image is taken from that empty location. Therefore, if DESTDIR was unset, the artifacts of the tool might potentially be mixed with other existing assets in /usr/local. In most of the dockerfiles, /opt/TOOL_NAME is used as the temporary empty location.
Despite the usage of these variables being documented in the GNU Coding Standards, DESTDIR seems not to be widely used, except by packagers. As a result, contributors might need to patch the build scripts upstream. Sometimes DESTDIR is not supported at all, or it is supported but some lines in the makefiles are missing it. Do not hesitate to reach out for help through the issues or the chat!
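As an illustration, a hypothetical makefile excerpt showing an install rule that honours DESTDIR (names are made up; recipe lines must be indented with a tab):

PREFIX ?= /usr/local

# Every destination path must be prefixed with $(DESTDIR); otherwise
# `make DESTDIR=/opt/tool_name install` would write straight into /usr/local.
install: tool
	install -D tool $(DESTDIR)$(PREFIX)/bin/tool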
Build
A helper script is provided in .github/bin/ to ease building images by hiding the common options:
- dockerBuild IMAGE_NAME DOCKERFILE_NAME [TARGET_STAGE]
  - IMAGE_NAME: image name, without the hdlc/ prefix.
  - DOCKERFILE_NAME: name of the dockerfile to be used.
  - (optional) TARGET_STAGE: name of the target stage in the dockerfile.
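For example, a hypothetical invocation (image, dockerfile and stage names are made up for illustration; adjust the path to the script as needed):

# Build image hdlc/ghdl from the stage named 'ghdl' in ghdl.dockerfile.
./.github/bin/dockerBuild ghdl ghdl.dockerfile ghdl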
Inspect
All ready-to-use images (coloured GREEN or BROWN in the [Graph]) are runnable. Therefore, users/contributors can run containers and test the tools interactively or through scripting. However, since hdlc/pkg images are not runnable, creating another image is required in order to inspect their content from a container. For instance:
FROM busybox
COPY --from=hdlc/pkg:TOOL_NAME /TOOL_NAME /
In fact, .github/bin/dockerTestPkg uses a similar dockerfile for running *.pkg.sh scripts from test/. See Test.
Alternatively, or as a complement, wagoodman/dive is a lightweight tool with a nice terminal-based UI for exploring the layers and contents of container images. It can be downloaded as a tarball/zipfile, or used as a container:
docker run --rm -it \
-v //var/run/docker.sock://var/run/docker.sock \
wagoodman/dive \
hdlc/IMAGE[:TAG]
[Screenshot: exploring hdlc/pkg:yosys with wagoodman/dive.]
Test
There is a test script in test/ for each image in this ecosystem, according to the following convention:
- Scripts for package images, hdlc/pkg:TOOL_NAME, are named TOOL_NAME.pkg.sh.
- Scripts for other images, hdlc/NAME[:TAG], are named NAME[--TAG].sh.
- Other helper scripts are named _*.sh.
Furthermore, hdl/smoke-test is a submodule of this repository (test/smoke-test). It contains fine-grained tests that cover the most important functionalities of the tools, and it is used in other packaging projects too. Therefore, container tests are expected to execute the smoke-tests corresponding to the tools available in the image, before executing more specific tests.
There are a couple of helper scripts in .github/bin/ for testing the images. Those are used in CI, but can be useful locally too:
- dockerTest IMAGE_NAME [SCRIPT_NAME]
  - IMAGE_NAME: image name, without the hdlc/ prefix.
  - (optional) SCRIPT_NAME: name of the test script; only required if it does not match echo IMAGE_NAME | sed 's#:#--#'.
- dockerTestPkg TAG_NAME [DIR_NAME]
  - TAG_NAME: tag name (i.e. image name without the hdlc/pkg: prefix).
  - (optional) DIR_NAME: name of the directory inside the package image which needs to be copied to the temporary image for testing.
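A few hypothetical invocations, for illustration only (tool names are made up, and the path to the scripts may need to be adjusted):

# Test hdlc/ghdl with test/ghdl.sh (default script name).
./.github/bin/dockerTest ghdl
# Test hdlc/ghdl:yosys with test/ghdl--yosys.sh (the ':' is mapped to '--').
./.github/bin/dockerTest ghdl:yosys
# Copy the content of hdlc/pkg:yosys into a temporary image and run test/yosys.pkg.sh.
./.github/bin/dockerTestPkg yosys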
Step by step checklist
- Create or update dockerfile(s).
  - For each tool, a dockerfile recipe exists. All dockerfiles must use, at least, two stages.
    - One stage, named build, is to be based on hdlc/build:base, hdlc/build:build or hdlc/build:dev. In this first stage, add the missing build dependencies. Then, build the tool/project using the standard PREFIX, but install to a custom location using DESTDIR. See Package images.
    - If the tool/project is to be used standalone, create a stage based on hdlc/build:base. Install runtime dependencies only.
    - If the tool/project is to be packaged, create a stage based on scratch.
    - In any case, copy the tool artifacts from the build stage using COPY --from=STAGE_NAME. In practice, several dockerfiles produce at least one package image and one ready-to-use image. Therefore, dockerfiles will likely have more than two stages.
  - Some tools are to be added to existing images which include several tools (coloured BROWN in the [Graph]). After creating the dockerfile where the corresponding package image is defined, add COPY --from=hdlc/pkg:TOOL_NAME statements to the dockerfiles of the multi-tool images.
- Build and test the dockerfile(s) locally. Use the helper scripts from .github/bin, as explained in Build and Test.
- Create or update workflow(s).
  - For each dockerfile, a GitHub Actions workflow is added to .github/workflows/. Find documentation at Workflow syntax for GitHub Actions. Copying one of the existing workflows in this repo and adapting it is suggested.
  - In each workflow, all the images produced from stages of the corresponding dockerfile are built, tested and pushed. The dockerBuild, dockerTest, dockerTestPkg and dockerPush scripts from .github/bin are used. A minimal sketch is shown after this checklist.
- Update documentation.
  - If a new tool was added:
    - Ensure that the tool is listed at hdl/awesome, since that’s where all the tools/projects in the table point to.
    - If a tool from the To Do list was added, remove it from the list.
    - Add a shield/badge to the table in Continuous Integration (CI).
  - Edit doc/tools.yaml. The table in Tools and images is autogenerated from that YAML file, using doc/gen_tool_table.py.
  - Update the Graphs.
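A hedged skeleton of such a workflow is shown below; the trigger, names, dockerfile path and push condition are assumptions, so use one of the existing workflows in .github/workflows/ as the actual reference:

name: 'tool'

on:
  push:
  schedule:
    - cron: '0 0 * * 5'

jobs:
  tool:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./.github/bin/dockerBuild tool tool.dockerfile
      - run: ./.github/bin/dockerTest tool
      - run: ./.github/bin/dockerPush tool
        if: github.event_name != 'pull_request'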
Continuous Integration (CI)
References
- GHDL:
- DBHI:
- SymbiFlow:
  - bit.ly/edda-conda-eda-spec: Conda based system for FPGA and ASIC Dev
  - Support providing the environment using docker rather than conda #15
- USB/IP: