hdl/containers | hdl/packages | community | hdl/awesome | hdl/constraints | hdl/smoke-tests

This repository contains scripts and GitHub Actions (GHA) YAML workflows for building, testing and deploying OCI images (aka Docker/Podman images) including open source electronic design automation (EDA) tooling. All of them are pushed to hub.docker.com/u/hdlc.


Tools and images

In the table below, Package and Ready-to-use list the image names at hub.docker.com/u/hdlc (hdlc/pkg:* and hdlc/*, respectively); I, F and P indicate that the tool is included in the hdlc/impl, hdlc/formal and hdlc/prog multi-tool images; Others lists further images that include the tool.

Tool | Package | Ready-to-use | I | F | P | Others
-----|---------|--------------|---|---|---|-------
boolector | hdlc/pkg:boolector | - | - | F | - | -
cvc4 | hdlc/pkg:cvc4 | - | - | F | - | -
ghdl | hdlc/pkg:ghdl | hdlc/ghdl | I | F | - | ghdl:yosys
ghdl-yosys-plugin | hdlc/pkg:ghdl-yosys-plugin | hdlc/ghdl:yosys | I | F | - | -
graphviz | - | - | I | F | - | yosys, ghdl:yosys
gtkwave | hdlc/pkg:gtkwave | - | - | - | - | -
icestorm | hdlc/pkg:icestorm | hdlc/icestorm | I | - | P | nextpnr:icestorm
nextpnr | hdlc/pkg:nextpnr-generic, hdlc/pkg:nextpnr-ice40, hdlc/pkg:nextpnr-ecp5 | hdlc/nextpnr:generic, hdlc/nextpnr:ice40, hdlc/nextpnr:ecp5, hdlc/nextpnr:icestorm, hdlc/nextpnr:prjtrellis, hdlc/nextpnr | I | - | - | -
openocd | - | - | - | - | P | -
prjtrellis | hdlc/pkg:prjtrellis | hdlc/prjtrellis | I | - | - | nextpnr:prjtrellis
superprove | hdlc/pkg:superprove | - | - | - | - | -
symbiyosys | hdlc/pkg:symbiyosys | - | - | F | - | -
verilator | hdlc/pkg:verilator | hdlc/verilator | - | - | - | -
yices2 | hdlc/pkg:yices2 | - | - | F | - | -
yosys | hdlc/pkg:yosys | hdlc/yosys | I | F | - | ghdl:yosys
z3 | hdlc/pkg:z3 | - | - | F | - | -

Images including multiple tools:

  • Implementation: GHDL + Yosys + nextpnr

    • hdlc/impl:ice40: nextpnr-ice40 only; hdlc/impl:icestorm additionally includes icestorm.

    • hdlc/impl:ecp5: nextpnr-ecp5 only; hdlc/impl:prjtrellis additionally includes prjtrellis.

    • hdlc/impl:generic: nextpnr-generic only.

    • hdlc/impl:pnr: all nextpnr targets (ecp5, ice40 and generic).

    • hdlc/impl (latest): impl:pnr, plus icestorm and prjtrellis.

  • Formal:

    • hdlc/formal (latest): all solvers that depend on Python 3.

    • hdlc/formal:min: Z3 only.

    • hdlc/formal:all: all solvers, depending on either Python 2 or Python 3.

  • Programming: hdlc/prog (latest)

To Do:

Context

This project started in GitHub repository ghdl/ghdl (named tgingold/ghdl back then). The main purpose was testing GHDL on multiple GNU/Linux distributions (Debian, Ubuntu and Fedora), since Travis CI supported Ubuntu only, plus Docker. For each target platform, two images were used: one for building and another one for testing.

Later, most of the Docker related sources were split off into repository ghdl/docker. There, some additional simulation tools were added, such as VUnit and GTKWave, and images including the ghdl-language-server were added as well. When synthesis features were added to GHDL, which provides a plugin for Yosys, tools for a complete open source workflow were requested: nextpnr, icestorm, prjtrellis, SymbiYosys, etc.

At some point, ghdl/docker contained as much content related to non-GHDL tools as resources related to the organisation itself. At the same time, SymbiFlow aimed at gathering open source projects to provide an integrated open source EDA solution; however, it did not have official container images and help was wanted. This repository was initially created for moving all the tools which were not part of GHDL from ghdl/docker to symbiflow/containers. However, apart from adding well-known Verilog tools, the scope was widened; hence, the repository was published as hdl/containers.

Usage

Official guidelines and recommendations for using containers suggest keeping containers small and specific for each tool/purpose. That fits well with the field of web microservices, which communicate through TCP/IP and which need to be composed, scaled and balanced all around the globe.

However, tooling in other fields is expected to communicate through a shared or local filesystem and/or pipes; therefore, many users treat containers as lightweight virtual machines. That is, they put all the tools in a single (heavy) container. Such containers are typically not moved around as frequently as microservices, but cached on developers' workstations.

In this project, both paradigms are supported; fine-grained images are available, as well as all-in-one images.

Fine-grained pulling

Ready-to-use images are provided for each tool, which contain the tool and the dependencies for it to run successfully. These are typically named hdlc/<TOOL_NAME>. Since all of them are based on the same root image, pulling multiple images involves retrieving a few additional layers only. Therefore, this is the recommended approach for CI or other environments with limited resources.
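
For instance, a CI job might pull a few of these images and run each tool in its own container. A minimal sketch (the tool invocations are illustrative):

# Pull only the images needed by the job; they share the common base layers
docker pull hdlc/ghdl
docker pull hdlc/yosys
# Run each tool in its own container, sharing the working directory
docker run --rm -v "$(pwd)":/src -w /src hdlc/ghdl ghdl --version
docker run --rm -v "$(pwd)":/src -w /src hdlc/yosys yosys -V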

  • There is an example at ghdl-yosys-blink: Makefile showcasing how to use this fine-grained approach with a makefile.

  • At marph91/icestick-remote the CI workflow for synthesis uses this approach.

  • Moreover, PyFPGA is a Python class for vendor-independent FPGA development, which runs GHDL, Yosys, etc. in containers.

These images are coloured GREEN in the [Graph].

All-in-one images

Multiple tools from fine-grained images are included in larger images for common use cases. These are named hdlc/<MAIN_USAGE>. This is the recommended approach for users who are less familiar with containers and want a quick replacement for full-featured virtual machines. Coherently, some common Unix tools (such as make or cmake) are also included in these all-in-one images.
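
For instance, a single all-in-one container can drive a whole flow. A sketch, assuming the project provides a makefile (the target name is hypothetical):

# Use one all-in-one image as a lightweight virtual machine;
# 'make synth' is a hypothetical target using ghdl, yosys and nextpnr
docker run --rm -it \
  -v "$(pwd)":/src -w /src \
  hdlc/impl \
  make synth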

These images are coloured BROWN in the [Graph].

Tools with GUI

By default, tools with Graphical User Interface (GUI) cannot be used in containers, because there is no graphical server. However, there are multiple alternatives for making an X11 or Wayland server visible to the container. mviereck/x11docker and mviereck/runx are full-featured helper scripts for setting up the environment and running GUI applications and desktop environments in OCI containers. GNU/Linux and Windows hosts are supported, and security related options are provided (such as cookie authentication). Users of GTKWave, nextpnr and other EDA tools will likely want to try x11docker (and runx).
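
For instance, a rough sketch with x11docker (whether the GUI is actually available depends on how the tool was built in the image; check x11docker's documentation for the options matching your host setup):

# Share the host graphical server, the clipboard and the working directory,
# then run nextpnr's GUI from a ready-to-use container
x11docker --clipboard --share "$(pwd)" hdlc/nextpnr:ice40 nextpnr-ice40 --gui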

USB/IP protocol support for Docker Desktop

The USB/IP protocol allows passing USB device(s) from server(s) to client(s) over the network. As explained at kernel.org/doc/readme/tools-usb-usbip-README, on GNU/Linux, USB/IP is implemented as a few kernel modules with companion userspace tools. However, the default underlying Hyper-V VM (based on Alpine Linux) shipped with Docker Desktop (aka docker-for-win/docker-for-mac) does not include the required kernel modules. Fortunately, privileged docker containers allow installing missing kernel modules. The shell script in usbip/ supports customising the native VM of Docker Desktop to add USB over IP support.

# Build kernel modules: in an unprivileged `alpine` container, retrieve the corresponding
# kernel sources, copy runtime config and enable USB/IP features, build `drivers/usb/usbip`
# and save `*.ko` artifacts to relative subdir `dist` on the host.
./run.sh -m

# Load/insert kernel modules: use a privileged `busybox` container to load kernel modules
# `usbip-core.ko` and `vhci-hcd.ko` from relative subdir `dist` on the host to the
# underlying Hyper-V VM.
./run.sh -l

# Build image `vhcli`, using `busybox` as a base, and including the
# [VirtualHere](https://www.virtualhere.com) GNU/Linux client for x86_64 along with the
# `*.ko` files built previously through `./run.sh -m`.
./run.sh -v
For manually selecting configuration options, building and inserting modules, see detailed procedure in gw0/docker-alpine-kernel-modules#usage.
Modules will be removed when the Hyper-V VM is restarted (i.e. when the host or Docker Desktop is restarted). For a permanent install, modules need to be copied to /lib/modules in the underlying VM, and /etc/modules needs to be configured accordingly. Use $(command -v winpty) docker run --rm -it --privileged --pid=host alpine nsenter -t 1 -m -u -n -i sh to access a shell with full permissions on the VM, as sketched below.
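
For reference, a sketch of the procedure described above:

# Open a shell with full permissions in the Docker Desktop VM
$(command -v winpty) docker run --rm -it --privileged --pid=host alpine \
  nsenter -t 1 -m -u -n -i sh
# Inside that shell, for a permanent install: copy the previously built *.ko files
# into /lib/modules/$(uname -r)/ and append the module names (usbip-core, vhci-hcd)
# to /etc/modules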

Example session

How to connect a Docker Desktop container to VirtualHere USB Server for Windows.

  • Start vhusbdwin64.exe on the host.

  • Ensure that the firewall is not blocking it.

# Start container named 'vhclient'
./run.sh -s
# List usb devices available in the container
./run.sh -e lsusb
# LIST hubs/devices found by vhclient
./run.sh -c "LIST"
# Manually add to the client the hub/server running on the host
./run.sh -c "MANUAL HUB ADD,host.docker.internal:7575"

sleep 10

./run.sh -c "LIST"
# Use a remote device in the container
./run.sh -c "USE,<SERVER HOSTNAME>.1"

sleep 4

# Check that the device is now available in the container
./run.sh -e lsusb
There is an issue/bug in Docker Desktop (docker/for-win#4548) that prevents the container where the USB device is added from seeing it. The workaround is to execute the board programming tool in a sibling container. For example: docker run --rm --privileged hdlc/prog iceprog -t.

Alternatives

Using VirtualHere is the only solution we could successfully use in order to share FTDI devices (icestick boards) between a Windows 10 host and a Docker Desktop container running on the same host. However, since the USB/IP protocol is open source, we’d like to try any other (preferably free and open source) server for Windows along with the default GNU/Linux usbip-tools. Should you know about any, please let us know!

We are aware of cezuni/usbip-win. However, it seems to be in a very early development state and the install procedure is still quite complex.

Serial (COM) devices can be shared with open source tools. On the one hand, hub4com from project com0com allows publishing a port through an RFC2217 server. On the other hand, socat can be used to link the network connection to a virtual tty device.

                   HOST                                           CONTAINER
        ---------------------------                 -------------------------------------
USB <-> | COMX <-> RFC2217 server | <-> network <-> | socat <-> /dev/ttySY <-> app/tool |
        ---------------------------                 -------------------------------------
REM On the Windows host
com2tcp-rfc2217.bat COM<X> <PORT>
# In the container
socat pty,link=/dev/ttyS<Y> tcp:host.docker.internal:<PORT>

It might be possible to replace hub4com with pyserial/pyserial. However, we did not test it.

Contributing

This repository provides a set of base images for building and for runtime: base.dockerfile. All the images in the ecosystem are based on these:

  • hdlc/build:base: Debian Buster with updated ca-certificates, curl and Python 3.

  • hdlc/build:build: based on base; includes clang and make.

  • hdlc/build:dev: based on build; includes cmake, libboost-all-dev and python3-dev.

Then, for each project/tool there are a dockerfile, a GitHub Actions workflow, and one or more test scripts. Those are used as follows:

  • Tools are built using hdlc/build images.

  • Package images based on scratch (and/or other reusable packages) are produced.

  • Ready-to-use images based on the runtime base image (hdlc/build:base) are produced.

  • Ready-to-use images are tested before uploading.

In some dockerfiles/workflows, package images are created too. Those are based on scratch and contain pre-built assets; hence, they are not really useful per se, but are meant to be used for building other images (a sketch of the overall dockerfile layout is shown below). In fact, multiple tools are merged into ready-to-use images for common use cases (such as hdlc/impl, hdlc/formal or hdlc/prog).
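
As an illustration, the overall shape of such a dockerfile might look as follows. This is a sketch only; footool, libfoo-dev and libfoo1 are hypothetical names, and the actual recipes in this repository differ per tool:

# Build stage: add build dependencies, build with the standard PREFIX,
# install into an empty location through DESTDIR
FROM hdlc/build:build AS build
RUN apt-get update -qq && apt-get install -y --no-install-recommends libfoo-dev
COPY . /src
RUN cd /src && make && make install DESTDIR=/opt/footool

# Package image: pre-built artifacts only, based on scratch
FROM scratch AS pkg
COPY --from=build /opt/footool /footool

# Ready-to-use image: runtime dependencies plus the artifacts
FROM hdlc/build:base
RUN apt-get update -qq && apt-get install -y --no-install-recommends libfoo1 \
 && rm -rf /var/lib/apt/lists/*
COPY --from=build /opt/footool /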

Before working on adding or extending the support for a tool, please check the issues and pull requests; open an issue or let us know through the chat. Due to its distributed nature, someone might be working on that already!
Currently, many projects don’t use containers at all, hence, all images are generated in this repository. However, the workload is expected to be distributed between multiple projects in the ecosystem.

Graphs

Understanding how all the pieces in this project fit together might be daunting for newcomers. Fortunately, there is a map to help maintainers and contributors travel through the ecosystem. Subdir graph/ contains the sources of directed graphs, where the relations between workflows, dockerfiles, images and tests are shown.

Graphviz's digraph format is used; hence, graphs can be rendered to multiple image formats. The SVG output shown in Figure 1 gives an overview of which images are created in each map. See the details in the figure corresponding to the name of each subgraph: Base (Figure 2), Sim (Figure 3), Synth (Figure 4), Impl (Figure 5), Formal (Figure 6). Multiple colours and arrow types are used for describing different dependency types; all of them are explained in the legend (Figure 7).
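
For instance, assuming a source file graph/graph.dot (the file name is illustrative), it can be rendered with Graphviz's dot:

# Render the digraph source to SVG and PNG
dot -Tsvg graph/graph.dot -o graph.svg
dot -Tpng graph/graph.dot -o graph.png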

Diagram
Figure 1. Workflows, dockerfiles, images and tests.
Diagram
Figure 2. Base: workflows, dockerfiles, images and tests.
Diagram
Figure 3. Sim: workflows, dockerfiles, images and tests.
Diagram
Figure 4. Synth: workflows, dockerfiles, images and tests.
Diagram
Figure 5. Impl: workflows, dockerfiles, images and tests.
Diagram
Figure 6. Formal: workflows, dockerfiles, images and tests.
Diagram
Figure 7. Legend of the directed graph.

Package images

Each tool/project is built once only in this image/container ecosystem. However, some (many) of the tools need to be included in multiple images for different purposes. Moreover, it is desirable to keep build recipes separated, in order to better understand the dependencies of each tool/project. Therefore, hdlc/pkg:* images are created/used (coloured BLUE in the [Graph]). These are all based on scratch and are not runnable. Instead, they contain pre-built artifacts, to be then added into other images through COPY --from=.

Since hdlc/pkg:* images are not runnable per se, but an intermediate utility, the usage of environment variables PREFIX and DESTDIR in the dockerfiles might be misleading. All the tools in the ecosystem are expected to be installed into /usr/local, the standard location for user built tools in most GNU/Linux distributions. Hence:

  • PREFIX should typically not need to be modified. Most of the tools will default to PREFIX=/usr/local, which is correct. Yet, some tools might default to / or /usr. In those cases, setting it explicitly is required.

  • DESTDIR must be set to an empty location when calling make install or when copying the artifacts by other means. The content of the corresponding package image is taken from that empty location. Therefore, if DESTDIR were unset, the artifacts of the tool might be mixed with other existing assets in /usr/local. In most of the dockerfiles, /opt/TOOL_NAME is used as the temporary empty location (see the sketch below).
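
For instance, a typical build step inside a build stage looks roughly as follows (a sketch; footool and its autotools-style build system are hypothetical):

# Configure with the standard prefix, but stage the install into an empty location
./configure --prefix=/usr/local
make -j"$(nproc)"
make install DESTDIR=/opt/footool
# The package image is then populated from /opt/footool only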

Despite the usage of these variables being documented in the GNU Coding Standards, DESTDIR seems not to be widely used, except by packagers. As a result, contributors might need to patch the build scripts upstream. Sometimes DESTDIR is not supported at all, or it is supported but some lines in the makefiles are missing it. Do not hesitate to reach out for help through the issues or the chat!

Build

A helper script is provided in .github/bin/ to ease building images, by hiding all common options:

  • dockerBuild IMAGE_NAME DOCKERFILE_NAME [TARGET_STAGE]

    • IMAGE_NAME: image name without hdlc/ prefix.

    • DOCKERFILE_NAME: name of the dockerfile to be used.

    • (optional) TARGET_STAGE: name of the target stage in the dockerfile (see the example below).
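
For example, from the root of the repository (the arguments are illustrative; see the workflows in .github/workflows/ for the exact values used in CI):

# Build the ready-to-use image hdlc/ghdl from the 'ghdl' dockerfile (illustrative)
./.github/bin/dockerBuild ghdl ghdl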

Inspect

All ready-to-use images (coloured GREEN or BROWN in the [Graph]) are runnable. Therefore, users/contributors can run containers and test the tools interactively or through scripting. However, since hdlc/pkg images are not runnable, creating another image is required in order to inspect their content from a container. For instance:

FROM busybox
COPY --from=hdlc/pkg:TOOL_NAME /TOOL_NAME /
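
Such a dockerfile can be built and explored interactively, for instance (the image and file names are illustrative):

# Build a throwaway image from a dockerfile like the one above, then open a shell in it
docker build -t inspect-tool -f inspect.dockerfile .
docker run --rm -it inspect-tool sh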

In fact, .github/bin/dockerTestPkg uses a similar dockerfile for running *.pkg.sh scripts from test/. See Test.

Alternatively, or as a complement, wagoodman/dive is a lightweight tool with a nice terminal based GUI for exploring layers and contents of container images. It can be downloaded as a tarball/zipfile, or used as a container:

docker run --rm -it \
  -v //var/run/docker.sock://var/run/docker.sock \
  wagoodman/dive \
  hdlc/IMAGE[:TAG]
wagoodman/dive
Figure 8. Inspection of hdlc/pkg:yosys with wagoodman/dive.

Test

There is a test script in test/ for each image in this ecosystem, according to the following convention:

  • Scripts for package images, hdlc/pkg:TOOL_NAME, are named TOOL_NAME.pkg.sh.

  • Scripts for other images, hdlc/NAME[:TAG], are named NAME[--TAG].sh.

  • Other helper scripts are named _*.sh.

Furthermore, hdl/smoke-test is a submodule of this repository (test/smoke-test). It contains fine-grained tests that cover the most important functionalities of the tools, and those tests are used in other packaging projects too. Therefore, container tests are expected to execute the smoke-tests corresponding to the tools available in the image, before executing more specific tests.

There are a couple of helper scripts in .github/bin/ for testing the images. Those are used in CI, but they can be useful locally too (see the examples after the list):

  • dockerTest IMAGE_NAME [SCRIPT_NAME]

    • IMAGE_NAME: image name without hdlc/ prefix.

    • (optional) SCRIPT_NAME: name of the test script, only required if it does not match echo IMAGE_NAME | sed 's#:#--#'.

  • dockerTestPkg TAG_NAME [DIR_NAME]

    • TAG_NAME: tag name (i.e. image name without hdlc/pkg: prefix).

    • (optional) DIR_NAME: directory name inside the package image which needs to be copied to the temporary image for testing.
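
For example (a sketch; the arguments map to test scripts as per the convention above):

# Test the ready-to-use image hdlc/ghdl with test/ghdl.sh
./.github/bin/dockerTest ghdl
# Test the package image hdlc/pkg:yosys with test/yosys.pkg.sh
./.github/bin/dockerTestPkg yosys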

Step by step checklist

  1. Create or update dockerfile(s).

    • For each tool, a dockerfile recipe exists. All dockerfiles must use at least two stages.

      • One stage, named build, is to be based on hdlc/build:base, hdlc/build:build or hdlc/build:dev. In this first stage, add the missing build dependencies, then build the tool/project using the standard PREFIX, but install to a custom location using DESTDIR. See Package images.

      • If the tool/project is to be used standalone, create a stage based on hdlc/build:base. Install runtime dependencies only.

      • If the tool/project is to be packaged, create a stage based on scratch.

      • In any case, copy the tool artifacts from the build stage using COPY --from=STAGE_NAME. In practice, several dockerfiles produce at least one package image and one ready-to-use image. Therefore, dockerfiles will likely have more than two stages.

    • Some tools are to be added to existing images which include several tools (coloured BROWN in the [Graph]). After creating the dockerfile where the corresponding package image is defined, add COPY --from=hdlc/pkg:TOOL_NAME statements to the dockerfiles of multi-tool images.

  2. Build and test the dockerfile(s) locally. Use helper scripts from .github/bin as explained in Build and Test.

    • If a new tool was added, or a new image is to be generated, a test script needs to be added to test/. See Test for naming guidelines.

    • Be careful with the order. If you add a new tool and include it in one of the multi-tool images, the package image needs to be built first.

  3. Create or update workflow(s).

    • For each dockerfile, a GitHub Actions workflow is added to .github/workflows/. Find documentation at Workflow syntax for GitHub Actions. Copying one of the existing workflows in this repo and adapting it is suggested.

    • In each workflow, all the images produced from stages of the corresponding dockerfile are built, tested and pushed. dockerBuild, dockerTest, dockerTestPkg and dockerPush scripts from .github/bin are used.

  4. Update the documentation.

Continuous Integration (CI)

Workflow status badges are available for each workflow: doc, base, ghdl, gtkwave, verilator, ghdl-yosys-plugin, icestorm, prjtrellis, yosys, nextpnr, boolector, cvc4, superprove, symbiyosys, yices2, z3, formal, impl and prog.

At the moment, there is no triggering mechanism set up between different GitHub repositories. All the workflows in this repo are triggered by push events, cron jobs, or manually.

References