
This repository contains scripts and GitHub Actions (GHA) YAML workflows for building, testing and deploying OCI images (aka Docker/Podman images) that include open source electronic design automation (EDA) tooling. All of them are pushed to gcr.io/hdl-containers, and mirrored to ghcr.io/hdl and hub.docker.com/u/hdlc.


Tools and images

Multiple collections of images are provided. All the images in a collection share the same base layer. The currently available collections are the following:

  • debian/buster based on debian:buster-slim

    • Available as gcr.io/hdl-containers/debian/buster/.

    • Mirrored to gcr.io/hdl-containers/, docker.io/hdlc/, ghcr.io/hdl/, and ghcr.io/hdl/debian/buster/.

  • centos/7 based on centos:7

    • Available as gcr.io/hdl-containers/centos/7/.

    • Mirrored to ghcr.io/hdl/centos/7/.

Image names and tags in this documentation are provided without the registry prefix. Hence, one of the prefixes listed above needs to be used when actually pulling/using the images. See Usage.
Tools marked with ! are NOT built from sources, but installed through system package managers.

In the table below, the Package and Ready-to-use columns list the images providing each tool. The Included in columns indicate whether the tool is included in the sim (S), impl (I), formal (F) or prog (P) multi-tool images; Others lists further multi-tool images containing the tool (see Images including multiple tools below).

| Tool              | Package image(s)                                         | Ready-to-use image(s)                                                                       | S | I | F | P | Others             |
|-------------------|----------------------------------------------------------|---------------------------------------------------------------------------------------------|---|---|---|---|--------------------|
| apicula           | pkg/apicula                                              | apicula                                                                                       | - | - | - | - | -                  |
| arachne-pnr       | pkg/arachne-pnr                                          | arachne-pnr                                                                                   | - | - | - | - | -                  |
| boolector         | pkg/boolector                                            | -                                                                                             | - | - | F | - | -                  |
| cocotb            | -                                                        | -                                                                                             | S | - | - | - | -                  |
| cvc4              | pkg/cvc4                                                 | -                                                                                             | - | - | F | - | -                  |
| ghdl              | pkg/ghdl, pkg/ghdl/llvm                                  | ghdl, ghdl:llvm                                                                               | S | I | F | - | ghdl/yosys         |
| ghdl-yosys-plugin | pkg/ghdl-yosys-plugin                                    | ghdl/yosys                                                                                    | - | I | F | - | -                  |
| graphviz !        | -                                                        | -                                                                                             | - | I | F | - | yosys, ghdl/yosys  |
| gtkwave           | pkg/gtkwave                                              | -                                                                                             | - | - | - | - | -                  |
| icestorm          | pkg/icestorm                                             | icestorm                                                                                      | - | I | - | P | nextpnr/icestorm   |
| klayout           | pkg/klayout                                              | klayout                                                                                       | - | - | - | - | -                  |
| magic             | pkg/magic                                                | magic                                                                                         | - | - | - | - | -                  |
| nextpnr           | pkg/nextpnr/generic, pkg/nextpnr/ice40, pkg/nextpnr/ecp5 | nextpnr:generic, nextpnr:ice40, nextpnr:ecp5, nextpnr:icestorm, nextpnr:prjtrellis, nextpnr   | - | I | - | - | -                  |
| openocd !         | -                                                        | -                                                                                             | - | - | - | P | -                  |
| prjtrellis        | pkg/prjtrellis                                           | prjtrellis                                                                                    | - | I | - | - | nextpnr/prjtrellis |
| superprove        | pkg/superprove                                           | -                                                                                             | - | - | F | - | -                  |
| symbiyosys        | pkg/symbiyosys                                           | -                                                                                             | - | - | F | - | -                  |
| verilator         | pkg/verilator                                            | verilator                                                                                     | S | - | - | - | -                  |
| vunit             | -                                                        | -                                                                                             | S | - | - | - | -                  |
| yices2            | pkg/yices2                                               | -                                                                                             | - | - | F | - | -                  |
| xyce              | pkg/xyce                                                 | xyce                                                                                          | - | - | - | - | -                  |
| yosys             | pkg/yosys                                                | yosys                                                                                         | - | I | F | - | ghdl/yosys         |
| z3                | pkg/z3                                                   | -                                                                                             | - | - | F | - | -                  |

Images including multiple tools:

  • Simulation:

    • sim: GHDL and Verilator.

    • sim:osvb: cocotb, OSVVM and VUnit; on top of sim.

    • sim:scypy-slim: matplotlib and numpy, on top of sim.

    • sim:scypy: osvb, on top of sim:scypy-slim.

    • sim:octave-slim: octave, on top of sim.

    • sim:octave: osvb, on top of sim:octave-slim.

  • Implementation: GHDL + Yosys + nextpnr.

    • impl:ice40: nextpnr-ice40 only; impl:icestorm: also including icestorm.

    • impl:ecp5: nextpnr-ecp5 only; impl:prjtrellis: also including prjtrellis.

    • impl:generic: nextpnr-generic only.

    • impl:pnr: all nextpnr targets (ecp5, ice40 and generic).

    • impl: impl:pnr, plus icestorm and prjtrellis.

  • Formal:

    • formal: all solvers depending on Python 3.

    • formal:min: Z3 only.

    • formal:all: all solvers, depending on either Python 2 or Python 3.

  • Programming: prog.


Context

This project started in GitHub repository ghdl/ghdl (which was named tgingold/ghdl back then). The main purpose was testing GHDL on multiple GNU/Linux distributions (Debian, Ubuntu and Fedora), since Travis CI supported Ubuntu hosts only, but did support Docker. For each target platform, two images were used: one for building and another one for testing.

Later, most of the Docker related sources were split to repository ghdl/docker. There, some additional simulation tools were added, such as VUnit and GtkWave. Images including the ghdl-language-server were also added. When synthesis features were added to GHDL, and since it provides a plugin for Yosys, tools for providing a complete open source workflow were requested. Those were nextpnr, icestorm, prjtrellis, SymbiYosys, etc.

At some point, ghdl/docker had as much content related to non-GHDL tools as resources related to the organisation itself. At the same time, SymbiFlow aimed at gathering open source projects to provide an integrated open source EDA solution. However, it did not have official container images and help was wanted. This repository was initially created for moving all the tools which were not part of GHDL from ghdl/docker to symbiflow/containers. However, apart from adding known Verilog tools, the scope was widened. Hence, the repository was published as hdl/containers.

Usage

Image names and tags in this documentation are provided without the registry prefix. Hence, REGISTRY/[COLLECTION/]* needs to be prefixed to the image names shown in Tools and images.
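As an illustration (see Tools and images for the available names), pulling the ready-to-use GHDL image might look like any of the following; these commands assume Docker or Podman is installed on the host:

# Default collection, through any of the mirrors
docker pull gcr.io/hdl-containers/ghdl
podman pull ghcr.io/hdl/ghdl
# Pinning the collection explicitly
docker pull gcr.io/hdl-containers/debian/buster/ghdl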

Official guidelines and recommendations for using containers suggest keeping containers small and specific for each tool/purpose (see docs.docker.com: Best practices for writing Dockerfiles). That fits well with the field of web microservices, which communicate through TCP/IP and which need to be composed, scaled and balanced all around the globe.

However, tooling in other fields is expected to communicate through a shared or local filesystem and/or pipes; therefore, many users treat containers as lightweight virtual machines. That is, they put all the tools in a single (heavy) container. Those containers are typically not moved around as frequently as microservices, but cached on developers' workstations.

In this project, both paradigms are supported; fine-grained images are available, as well as all-in-one images.

hdl examples in im-tomu/fomu-workshop showcase a Makefile based solution that supports both strategies: the fine-grained pulling and the all-in-one approach. An environment variable (CONTAINER_ENGINE) is used for selecting which approach to use. For didactic purposes, both of them are used in Continuous Integration (CI). See im-tomu/fomu-workshop: .github/workflows/test.yml.

Fine-grained pulling

Ready-to-use images are provided for each tool, which contain the tool and the dependencies for it to run successfully. These are typically named <REGISTRY_PREFIX>/<TOOL_NAME>.

Since all the images in each collection are based on the same root image, pulling multiple images involves retrieving a few additional layers only. Therefore, this is the recommended approach for CI or other environments with limited resources.
These images are coloured GREEN in the Graphs.
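For example, a CI job might run each tool in its own fine-grained container, sharing the working directory between the steps (a sketch; adjust the registry prefix and image names to your setup):

# Run GHDL and Yosys from separate single-tool containers
docker run --rm -v "$PWD":/src -w /src gcr.io/hdl-containers/ghdl ghdl --version
docker run --rm -v "$PWD":/src -w /src gcr.io/hdl-containers/yosys yosys -V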

All-in-one images

Multiple tools from fine-grained images are included in larger images for common use cases. These are named <REGISTRY_PREFIX>/<MAIN_USAGE>. This is the recommended approach for users who are less familiar with containers and want a quick replacement for full-featured virtual machines. Coherently, some common Unix tools (such as make or cmake) are also included in these all-in-one images.

These images are coloured BROWN in the Graphs.
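For instance, an interactive session with the implementation tools (GHDL, Yosys and nextpnr) might be started as follows; the mounted path is just an example:

# Start an interactive shell in the all-in-one implementation image
docker run --rm -it -v "$PWD":/src -w /src gcr.io/hdl-containers/impl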

Tools with GUI

By default, tools with a Graphical User Interface (GUI) cannot be used in containers, because no graphical server is available. However, there are multiple alternatives for making an X11 or Wayland server visible to the container. mviereck/x11docker and mviereck/runx are full-featured helper scripts for setting up the environment and running GUI applications and desktop environments in OCI containers. GNU/Linux and Windows hosts are supported, and security related options are provided (such as cookie authentication). Users of GTKWave, KLayout, nextpnr and other tools will likely want to try x11docker (and runx).

x11docker_klayout
Figure 1. Execution of KLayout in a container on Windows 10 (MSYS2/MINGW64) with mviereck/x11docker, mviereck/runx and VcxSrv.
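A minimal sketch, assuming x11docker is installed on the host and an X server is available (refer to mviereck/x11docker for the complete set of options and the security implications of each mode):

# Share the host display with a container and launch KLayout
x11docker --hostdisplay gcr.io/hdl-containers/klayout klayout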

USB/IP protocol support for Docker Desktop

The virtual machines used on Windows for running either the Windows Subsystem for Linux (WSL) or Docker Desktop do not support sharing USB devices with the containers by default. Only devices identified as storage or COM devices can be bound directly. See microsoft/WSL#5158. That prevents using arbitrary drivers inside the containers. As a result, most container users on Windows install board programming tools through MSYS2 (see hdl/MINGW-packages).

Nevertheless, the USB/IP protocol allows passing USB device(s) from server(s) to client(s) over the network. As explained at kernel.org/doc/readme/tools-usb-usbip-README, on GNU/Linux USB/IP is implemented as a few kernel modules with companion userspace tools. However, the default Hyper-V VM (based on Alpine Linux) shipped with Docker Desktop (aka docker-for-win/docker-for-mac) does not include the required kernel modules. Fortunately, privileged Docker containers allow installing the missing kernel modules. The shell script in usbip/ supports customising the native VM in Docker Desktop to add USB over IP support.

# Build kernel modules: in an unprivileged `alpine` container, retrieve the corresponding
# kernel sources, copy runtime config and enable USB/IP features, build `drivers/usb/usbip`
# and save `*.ko` artifacts to relative subdir `dist` on the host.
./run.sh -m

# Load/insert kernel modules: use a privileged `busybox` container to load kernel modules
# `usbip-core.ko` and `vhci-hcd.ko` from relative subdir `dist` on the host to the
# underlying Hyper-V VM.
./run.sh -l

# Build image `vhcli`, using `busybox` as a base, and including the
# [VirtualHere](https://www.virtualhere.com) GNU/Linux client for x86_64 along with the
# `*.ko` files built previously through `./run.sh -m`.
./run.sh -v
For manually selecting configuration options, building and inserting modules, see the detailed procedure in gw0/docker-alpine-kernel-modules#usage.
Modules will be removed when the Hyper-V VM is restarted (i.e. when the host or Docker Desktop is restarted). For a permanent install, modules need to be copied to /lib/modules in the underlying VM, and /etc/modules needs to be configured accordingly. Use $(command -v winpty) docker run --rm -it --privileged --pid=host alpine nsenter -t 1 -m -u -n -i sh to access a shell with full permissions on the VM.
USB/IP is supported in Renode too. See renode.rtfd.io/en/latest/tutorials/usbip.

Example session

How to connect a Docker Desktop container to VirtualHere USB Server for Windows.

  • Start vhusbdwin64.exe on the host

  • Ensure that the firewall is not blocking it.

# Start container named 'vhclient'
./run.sh -s
# List usb devices available in the container
./run.sh -e lsusb
# LIST hubs/devices found by vhclient
./run.sh -c "LIST"
# Manually add to the client the hub/server running on the host
./run.sh -c "MANUAL HUB ADD,host.docker.internal:7575"

sleep 10

./run.sh -c "LIST"
# Use a remote device in the container
./run.sh -c "USE,<SERVER HOSTNAME>.1"

sleep 4

# Check that the device is now available in the container
./run.sh -e lsusb
There is an issue/bug in Docker Desktop (docker/for-win#4548) that prevents the container where the USB device is added from seeing it. The workaround is to execute the board programming tool in a sibling container. For example: docker run --rm --privileged */prog iceprog -t.

Alternatives

VirtualHere is the only solution we could successfully use for sharing FTDI devices (icestick boards) between a Windows 10 host and a Docker Desktop container running on that same host. However, since the USB/IP protocol is open source, we’d like to try any other (preferably free and open source) server for Windows along with the default GNU/Linux usbip-tools. Should you know about any, please let us know!

We are aware of cezuni/usbip-win. However, it seems to be at a very early development stage and the installation procedure is still quite complex.

Serial (COM) devices can be shared with open source tools. On the one hand, hub4com from project com0com allows publishing a port through an RFC2217 server. On the other hand, socat can be used to link the network connection to a virtual tty device.

                   HOST                                           CONTAINER
        ---------------------------                 -------------------------------------
USB <-> | COMX <-> RFC2217 server | <-> network <-> | socat <-> /dev/ttySY <-> app/tool |
        ---------------------------                 -------------------------------------
REM On the Windows host
com2tcp-rfc2217.bat COM<X> <PORT>
# In the container
socat pty,link=/dev/ttyS<Y> tcp:host.docker.internal:<PORT>

It might be possible to replace hub4com with pyserial/pyserial. However, we did not test it.

Contributing

As explained in Tools and images and in Usage, multiple collections of images are provided. For each collection, a set of base images is provided, which are to be used for building and for runtime. These are defined in base.dockerfile. See, for instance, debian-buster/base.dockerfile. All the images in the ecosystem are based on these:

  • build/base: Debian Buster or CentOS 7, with updated ca-certificates, curl and Python 3.

  • build/build: based on build/base, includes clang and make.

  • build/dev: based on build/build, includes cmake, libboost-all-dev and python3-dev.

Then, for each project/tool there are dockerfiles (one per collection), a GitHub Actions workflow, and one or more test scripts. Those are used as follows:

  • Tools are built using <REGISTRY_PREFIX>/build images.

  • Package images based on scratch (and/or other reusable packages) are produced.

  • Ready-to-use images based on the runtime base image (<REGISTRY_PREFIX>/build/base) are produced.

  • Ready-to-use images are tested before uploading.

In some dockerfiles/workflows, package images are created too. Those are based on scratch and contain pre-built assets. Therefore, they are not really useful per se, but meant to be used for building other images. In fact, multiple tools are merged into ready-to-use images for common use cases (such as <REGISTRY_PREFIX>/impl, <REGISTRY_PREFIX>/formal or <REGISTRY_PREFIX>/prog).
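As an orientation only, a dockerfile following this structure might look like the sketch below; the tool name (foo), its dependencies and the paths are made up, and the actual dockerfiles in this repository may differ:

ARG REGISTRY='gcr.io/hdl-containers/debian/buster'

# Build stage: add the missing build dependencies and install to a custom DESTDIR
FROM $REGISTRY/build/build AS build
RUN apt-get update -qq && apt-get -y install libfoo-dev  # example dependency
COPY . /src
RUN cd /src && make && make DESTDIR=/opt/foo install

# Package image: pre-built artifacts only, not runnable
FROM scratch AS pkg
COPY --from=build /opt/foo /foo

# Ready-to-use image: runtime base plus the tool artifacts and runtime dependencies
FROM $REGISTRY/build/base
RUN apt-get update -qq && apt-get -y install libfoo && apt-get clean  # example dependency
COPY --from=build /opt/foo /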

Before working on adding or extending the support for a tool, please check the issues and pull requests; open an issue or let us know through the chat. Due to its distributed nature, someone might be working on that already!
Currently, many projects don’t use containers at all; hence, all images are generated in this repository. However, the workload is expected to be distributed between multiple projects in the ecosystem.

Graphs

Understanding how all the pieces in this project fit together might be daunting for newcomers. Fortunately, there is a map to help maintainers and contributors navigate the ecosystem. Subdir graph/ contains the sources of directed graphs, which show the relations between workflows, dockerfiles, images and tests.

Graphviz’s digraph format is used, hence graphs can be rendered to multiple image formats. The SVG output shown in Figure 2 describes which images are created in each subgraph. See the details in the figure corresponding to the name of the subgraph: Base (Figure 3), Sim (Figure 4), Synth (Figure 5), Impl (Figure 6), Formal (Figure 7) and ASIC (Figure 8). Multiple colours and arrow types are used for describing different dependency types. All of those are explained in the legend: Figure 9.

These graphs represent a single collection of images (or the virtual aggregation of several of them). In practice, some tools might be missing in some collections. For instance, a tool might be available in Debian Buster based containers, but not in CentOS 7 ones. That information is not tracked in the graphs yet; please check whether a dockerfile exists in the corresponding subdir.
Diagram
Figure 2. Subgraphs and images.
Diagram
Figure 3. Base: workflows, dockerfiles, images and tests.
Diagram
Figure 4. Sim: workflows, dockerfiles, images and tests.
Diagram
Figure 5. Synth: workflows, dockerfiles, images and tests.
Diagram
Figure 6. Impl: workflows, dockerfiles, images and tests.
Diagram
Figure 7. Formal: workflows, dockerfiles, images and tests.
Diagram
Figure 8. ASIC: workflows, dockerfiles, images and tests.
Diagram
Figure 9. Legend of the directed graph.

Package images

Each EDA tool/project is built once only for each collection in this image/container ecosystem. However, some (many) of the tools need to be included in multiple images for different purposes. Moreover, it is desirable to keep build recipes separated, in order to better understand the dependencies of each tool/project. Therefore, <REGISTRY_PREFIX>/pkg/ images are created/used (coloured BLUE in the Graphs). These are all based on scratch and are not runnable. Instead, they contain pre-built artifacts, to be then added into other images through COPY --from=.

Since <REGISTRY_PREFIX>/pkg/ images are not runnable per se, but an intermediate utility, the usage of environment variables PREFIX and DESTDIR in the dockerfiles might be misleading. All the tools in the ecosystem are expected to be installed into /usr/local, the standard location for user built tools on most GNU/Linux distributions. Hence:

  • PREFIX should typically not need to be modified. Most of the tools will default to PREFIX=/usr/local, which is correct. Yet, some tools might default to / or /usr. In those cases, setting it explicitly is required.

  • DESTDIR must be set to an empty location when calling make install or when copying the artifacts by other means. The content of the corresponding package image is taken from that otherwise empty location. Therefore, if DESTDIR were unset, the artifacts of the tool could be mixed with other existing assets in /usr/local. In most of the dockerfiles, /opt/TOOL_NAME is used as the temporary empty location.

Although the usage of these variables is documented in the GNU Coding Standards, DESTDIR seems to be rarely used, except by packagers. As a result, contributors might need to patch the build scripts upstream. Sometimes DESTDIR is not supported at all, or it is supported but some lines in the makefiles are missing it. Do not hesitate to reach out for help through the issues or the chat!
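In practice, the install step typically boils down to something like the following (the commands and the tool name are illustrative, not taken from a specific dockerfile):

# Keep the standard PREFIX, but stage the installation into an empty location
./configure --prefix=/usr/local
make
make DESTDIR=/opt/some-tool install
# The package image is built from the content of /opt/some-tool, and ready-to-use
# images copy it back to /, so the artifacts end up in /usr/local again.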

Utils

Some helper shell and Python utilities are available in utils/bin and utils/pyHDLC, respectively. A utils/setup.sh script is provided for installing Python dependencies and adding the bin subdir to the PATH. Since pip is used for installing utils/pyHDLC/requirements.txt, it is desirable to create a virtual environment before running setup.sh:

virtualenv venv
source venv/bin/activate
./utils/setup.sh

Build

dockerBuild helps building one or multiple images at once, by hiding all common options. It’s a wrapper around command build of pyHDLC.cli:

usage: cli.py build [-h] [-c COLLECTION] [-r REGISTRY] [-f DOCKERFILE] [-t TARGET] [-a ARGIMG] [-p] [-d] Image [Image ...]

positional arguments:
  Image                 image name(s), without registry prefix.

optional arguments:
  -h, --help            show this help message and exit
  -c COLLECTION, --collection COLLECTION
                        name of the collection/subset of images.
                        (default: debian/buster)
  -r REGISTRY, --registry REGISTRY
                        container image registry prefix.
                        (default: gcr.io/hdl-containers)
  -f DOCKERFILE, --dockerfile DOCKERFILE
                        dockerfile to be built, from the collection.
                        (default: None)
  -t TARGET, --target TARGET
                        target stage in the dockerfile.
                        (default: None)
  -a ARGIMG, --argimg ARGIMG
                        base image passed as an ARG to the dockerfile.
                        (default: None)
  -p, --pkg             preprend 'pkg/' to Image and set Target to 'pkg' (if unset).
                        (default: False)
  -d, --default         set default Dockerfile, Target and ArgImg options, given the image name(s).
                        (default: False)
DOCKERFILE defaults to Image if None.
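For instance, assuming that dockerBuild forwards these options to cli.py build unchanged (which might not be exactly the case; check utils/bin), typical invocations might look like:

# Build the ready-to-use GHDL image for the default collection, deriving
# Dockerfile, Target and ArgImg from the image name
dockerBuild -d ghdl
# Build the package image of Yosys for the debian/buster collection
dockerBuild -c debian/buster -p yosys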

Inspect

All ready-to-use images (coloured GREEN or BROWN in the Graphs) are runnable. Therefore, users/contributors can run containers and test the tools interactively or through scripting. However, since <REGISTRY_PREFIX>/pkg images are not runnable, creating another image is required in order to inspect their content from a container. For instance:

FROM busybox
COPY --from=<REGISTRY_PREFIX>/pkg/TOOL_NAME /TOOL_NAME /

In fact, dockerTestPkg uses a similar dockerfile for running .pkg.sh scripts from test/. See Test.

Alternatively, or as a complement, wagoodman/dive is a lightweight tool with a nice terminal based GUI for exploring layers and contents of container images. It can be downloaded as a tarball/zipfile, or used as a container:

docker run --rm -it \
  -v //var/run/docker.sock://var/run/docker.sock \
  wagoodman/dive \
  <REGISTRY_PREFIX>/IMAGE[:TAG]
wagoodman/dive
Figure 10. Inspection of <REGISTRY_PREFIX>/pkg/yosys with wagoodman/dive.

dockerDive is a wrapper around the wagoodman/dive container, which supports one or two arguments for specifying the image to be inspected. The default registry prefix is gcr.io/hdl-containers; however, it can be overridden through the HDL_REGISTRY environment variable.

For instance, inspect image gcr.io/hdl-containers/debian/buster/ghdl:

dockerDive debian/buster ghdl

or, inspect any image from any registry:

HDL_REGISTRY=docker.io dockerDive python:slim-buster

Test

There is a test script in test/ for each image in this ecosystem, according to the following convention:

  • Scripts for package images, <REGISTRY_PREFIX>/pkg/TOOL_NAME, are named TOOL_NAME.pkg.sh.

  • Scripts for other images, <REGISTRY_PREFIX>/NAME[:TAG], are named NAME[--TAG].sh.

  • Other helper scripts are named _*.sh.

Furthermore, hdl/smoke-test is a submodule of this repository (test/smoke-test). It contains fine-grained tests that cover the most important functionalities of the tools, and it is used in other packaging projects too. Therefore, container tests are expected to execute the smoke-tests corresponding to the tools available in the image, before executing more specific tests.

There are a couple of helper scripts in utils/bin/, for testing the images. Those are used in CI but can be useful locally too:

  • dockerTest BASE_OS IMAGE_NAME [SCRIPT_NAME]

    • BASE_OS: set/collection of images (e.g. debian/buster).

    • IMAGE_NAME: image name without the prefix.

    • (optional) SCRIPT_NAME: name of the test script, only required if it does not match echo IMAGE_NAME | sed 's#:#--#'.

  • dockerTestPkg BASE_OS TAG_NAME [DIR_NAME]

    • BASE_OS: set/collection of images (e.g. debian/buster).

    • TAG_NAME: tag name (i.e. image name without <REGISTRY_PREFIX>/pkg/ prefix).

    • (optional) DIR_NAME: directory name inside the package image which needs to be copied to the temporary image for testing.
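For illustration, and following the naming conventions described above (the image and script names are examples):

# Test the ready-to-use GHDL image with test/ghdl.sh
dockerTest debian/buster ghdl
# Test a tagged image with an explicitly named script (test/ghdl--llvm.sh)
dockerTest debian/buster ghdl:llvm ghdl--llvm
# Test the content of pkg/yosys through a temporary image
dockerTestPkg debian/buster yosys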

Step by step checklist

  1. Create or update dockerfile(s).

    • For each tool and collection, a Dockerfile recipe exists.

      • It is recommended, but not required, to add tools to multiple collections at the same time. That is, to create one dockerfile for each collection. Nevertheless, it is possible to add a tool to just one or to a limited set of collections.

      • All dockerfiles must use, at least, two stages.

        • One stage, named build, is to be based on <REGISTRY_PREFIX>/build/base or <REGISTRY_PREFIX>/build/build or <REGISTRY_PREFIX>/build/dev. In this first stage, you need to add the missing build dependencies. Then, build the tool/project using the standard PREFIX, but install to a custom location using DESTDIR. See Package images.

        • If the tool/project is to be used standalone, create a stage based on <REGISTRY_PREFIX>/build/base. Install runtime dependencies only.

        • If the tool/project is to be packaged, create a stage based on scratch.

        • In any case, copy the tool artifacts from the build stage using COPY --from=STAGE_NAME.

        • In practice, several dockerfiles produce at least one package image and one ready-to-use image. Therefore, dockerfiles will likely have more than two stages.

    • Some tools are to be added to existing images which include several tools (coloured BROWN in the Graphs). After creating the dockerfile where the corresponding package image is defined, add COPY --from=<REGISTRY_PREFIX>/pkg/TOOL_NAME statements to the dockerfiles of multi-tool images.

  2. Build and test the dockerfile(s) locally. Use helper scripts from .github/bin as explained in Build and Test.

    • If a new tool was added, or a new image is to be generated, a test script needs to be added to test/. See Test for naming guidelines.

    • Be careful with the order. If you add a new tool and include it in one of the multi-tool images, the package image needs to be built first.

  3. Create or update workflow(s).

    • For each tool or multi-tool image, a GitHub Actions workflow is added to .github/workflows/. Find documentation at Workflow syntax for GitHub Actions. Copying some of the existing workflows in this repo and adapting it is suggested.

    • In each workflow, all the images produced from stages of the corresponding dockerfile are built, tested and pushed. Scripts from .github/bin are used.

    • The workflow matrix is used for deciding which collections each tool is to be built for.

  4. Update the documentation.

Development

Continuous Integration (CI)

There is one GitHub Actions workflow (with a status badge in the rendered documentation) for each of: doc, base, ghdl, gtkwave, verilator, xyce, apicula, ghdl-yosys-plugin, icestorm, prjtrellis, yosys, nextpnr, arachne-pnr, boolector, cvc4, pono, superprove, symbiyosys, yices2, z3, klayout, magic, formal, sim, impl and prog.

At the moment, there is no triggering mechanism set up between different GitHub repositories. All the workflows in this repo are triggered by push events, CRON jobs, or manually.

Tasks

The strategic priorities are the following:

  • Add the missing ASIC tools to collection debian/buster. Following the order in efabless/openlane: OpenLANE Design Stages is suggested.

  • Add the missing tools to collection centos/7. This collection is currently empty (it contains the base dockerfile only). It was added because OpenLane containers are based on centos:7, but may be updated to centos:8. Moreover, some DARPA users might be constrained to centos:6. We should confirm that.

  • Set up cross-triggering between CI workflows. Currently, all workflows are triggered at the same time. That produces some races and some tools are built twice in the same run (moby/buildkit#1930). That is not critical because we do know how to solve it (.github/trigger.sh). We didn’t implement it yet because we’d like it to be automatically synchronised with the graphs (see Graph generation/parsing).

  • Coordinate with Antmicro/SymbiFlow for using self-hosted runners provided by Google and their orchestration plumbing.

  • Mirror all the images to gcr.io. We need to read how billing works in gcr.io. If it’s free, like docker.io and ghcr.io, we can register directly. Otherwise, we need to coordinate with mithro@Google.

  • Enhance the (atomic) smoke-tests, which are currently placeholders mostly.

  • Provide multiarch images (at least, add arm32v7 and arm64v8 variants). See dbhi/qus and Multi-arch build and images, the simple way.

  • Versioning. Currently, images are not versioned explicitly. Images are only pushed when builds and tests are successful. Users who cannot afford breaking changes can use the images by digest, instead of doing it by name. However, we should probably leverage manifests for publishing some versioned ecosystems, implying that we run a full test suite on a specific group of images and then tag them all together as a nicely behaving family/release.

These are all in no particular order, although most of them are closely related to each other. If you want to tackle any of them, let us know or join the chat!

Graph generation/parsing

Currently, the Graphs are updated manually. That is error prone, because the information shown in them is also defined in the dockerfiles and CI workflows. Ideally, graphs would be auto-generated by parsing/analysing those. However, the complexity of the graphs is non-trivial. There are many tools for automatically generating diagrams from large datasets, but not so many with features for visualising complex hierarchical nets. The requirements we are willing to satisfy are the following:

  • Open source tool and format(s).

  • Generates diagrams programmatically from a text representation and/or through a Python or Golang API.

  • Supports hierarchical DAGs with multiple levels of hierarchy. I.e. (nested) clusters which are nodes and have ports.

  • Generates SVG of the hierarchical DAGs.

  • Supports node styles (shape and colour).

  • Ideally:

    • Clusters are collapsible when shown in an HTML site.

    • Text in the nodes can be links to web sites.

So far, the following options were analysed:

Graphviz and Gephi are probably the best-known tools for generating all kinds of diagrams. Unfortunately, neither of them supports hierarchical clusters as required in this use case. The same issue applies to aafigure and graphdracula.

yEd is a very interesting product, and it fits most (if not all) of the technical constraints. However, it’s not open source: the editor is freeware and the SDK is paid, and the SDK is required for programmatic generation. The draw.io/mxgraph toolkit seems to be the most similar open source solution. However, using SDKs for building custom interactive diagramming tools feels like overkill for this use case. We don’t need graphs to be editable through a GUI!

Similarly, TikZ and SVG generation libraries (such as svgo) could be used to get the work done, but they would require a non-negligible amount of plumbing, which would increase the maintenance burden instead of reducing it. This would be a last resort.

Although d3-hwschematic and netlistsvg target very different use cases, the JSON format used in elkjs might be suitable.

dagre-d3 is meant for DAGs and it supports nested clusters (experimentally; see Dagre D3 Demo: Clusters). Although clusters seem not to have ports, it might be an easy migration from the current solution. Since it’s a client-side JS library, it does not write an SVG file to disk by default, but achieving that should be trivial.

As a result, it seems that the most suitable solution might be using the JSON format from elkjs, either with elkjs or with dagre-d3. Yet, generating an SVG programmatically seems not to be as straightforward as with other solutions such as Graphviz’s dot. Several references illustrate advanced features for building custom views/GUIs/editors.

However, writing the JSON by hand is cumbersome. On the one hand, some nodes need an explicit size for them to be shown. On the other hand, it seems not possible to draw edges across hierarchies; ports need to be explicitly defined for that purpose. Therefore, the complexity of generating the JSON from a set of nodes, edges and clusters is non-trivial.

utils/pyHDLC/map.py@pymap contains work in progress. First, GenerateMap builds a DAG by parsing the dockerfiles. Then, report prints the content in the terminal, for debugging purposes. Last, dotgraph generates a Graphviz dot diagram. The dot diagram does not have clusters; we want to add those by parsing the GitHub Actions workflows (see below). However, we want to first reproduce the dot output using elkjs. See function elkjsgraph in utils/pyHDLC/map.py@pymap. Do you want to give it a try? Let us know or join the chat!

Reading dockerfiles

One of the two sources of information for the graph is the dockerfiles. As far as we are aware, there is no tool for generating a DAG from the stages of a dockerfile. However, asottile/dockerfile is an interesting Python module which wraps docker/moby’s golang parser. Hence, it can be used for getting the stages and the COPY --from or --mount statements, in order to generate the hierarchy. See utils/pyHDLC/map.py.

Reading GitHub Actions workflow files

The second source of information is the CI workflow files. Since YAML is used, reading them from any language is trivial; however, semantic analysis needs to be done. In particular, variables from the matrix need to be expanded/replaced. nektos/act is written in golang and allows executing GitHub Actions workflows locally, so it might have the required features. However, as far as we are aware, it is not meant to be used as a library.

References