Lessons learned running a Hugo website with multiple editors
9 months ago we launched a new website for the student council. We switched from a website based on the CMS Plone to the static site generator Hugo. One big difference from a user's perspective is that Hugo has no editor at all, while Plone comes with a WYSIWYG editor that can be accessed over the web. I will go over some of the problems we encountered and the lessons we learned.
The Setup⌗
To give some context, let's start with a small overview of our setup and our CI pipeline.
Every time a change is pushed to the git repository, the following actions happen:
- Lint Markdown: The Markdown files are linted to enforce a consistent style. For this we use markdownlint. Headings or unordered lists, for example, can be expressed in several different ways.
- Lint Other: Lints all other files, like config files and HTML templates. It is based on prettier.
- Build Hugo: Builds the website using Hugo. Hugo ensures that all internal links are valid.
- Lint HTML: Checks that the generated HTML does not contain invalid external links. For this we use htmltest.
- Deploy: If all checks have passed, a container image with a web server and the website is built and pushed to our registry. The image is then picked up by Watchtower, which restarts the container with the new image.
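For illustration, here is a minimal sketch of what such a pipeline could look like in GitLab CI. The job names, images and commands are assumptions for this sketch, not our actual configuration:

```yaml
# Sketch of the pipeline stages described above; names and commands are illustrative.
stages: [lint, build, deploy]

lint-markdown:
  stage: lint
  image: node:20
  script:
    - npx markdownlint-cli2 "content/**/*.md"

lint-other:
  stage: lint
  image: node:20
  script:
    - npx prettier --check .

build-hugo:
  stage: build
  # assumes a runner image that has hugo installed
  script:
    - hugo --minify
  artifacts:
    paths: [public]

lint-html:
  stage: build
  needs: [build-hugo]
  script:
    - htmltest public

deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  script:
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
```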
The missing editor⌗
How we planned it⌗
During development we had to have the code and all the tools installed locally. This has the benefit that we can run the linters automatically before each commit with pre-commit hooks. Without these checks you could push content that breaks the CI because it does not follow the style guidelines. We had also gained some experience with this setup during the initial development of the website, so this was the recommended setup for editing it.
The local setup required five steps to get up and running (sketched below):

- install four commonly available packages (`hugo`, `npm`, `git` and `git-lfs`)
- clone the repository
- set up Git LFS
- set up `pre-commit`
- install the npm packages
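As a rough sketch, the documented commands boiled down to something like the following. The package manager, repository URL and pre-commit installation method are assumptions here, not our exact documentation:

```sh
# Illustrative setup; package manager, URL and install methods are assumptions.
sudo apt install hugo npm git git-lfs         # 1. install the four packages
git clone https://gitlab.example.org/council/website.git  # 2. hypothetical URL
cd website
git lfs install && git lfs pull               # 3. set up Git LFS
pip install pre-commit && pre-commit install  # 4. set up the pre-commit hooks
npm install                                   # 5. install the npm packages
```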
All the steps were thoroughly documented and required less than 10 commands, most of which could be copied directly from the documentation. Still, this was too much for most users.
What users did instead⌗
Many users independently started using the Web IDE. The Web IDE is a minimal Visual Studio Code (OSS) that is integrated with GitLab and runs in the browser. Repositories and specific files in a repository can easily be opened in the Web IDE. Unfortunately, the linters were not available in the Web IDE because it lacked extension support.
This far too often led to the following course of events:

- A user adds incorrectly formatted changes through the Web IDE.
- The build breaks and the user is notified by mail.
- The user has to come back and make another commit to fix their changes.
- The user is, understandably, annoyed.
After all, who would want to wait 5 minutes to see if their change breaks something? Certainly not me!
Embracing the Web IDE⌗
So instead of trying to get users to adopt our workflow (most likely futile), we decided to embrace the Web IDE and tried our best to make editing there easier.
- Preconfigure an editor ruler in `.vscode/settings.json` to indicate the maximum line length (see the sketch after this list).
- Migrate to JS-based linting tools so that they could also run in the Web IDE:
  - prettier is JS-based and has an extension for VS Code, so there was no work to do here.
  - For the linting of Markdown we switched from markdownlint/markdownlint (Ruby) to DavidAnson/markdownlint (JS). The JS version comes with an extension for VS Code. It is heavily inspired by the Ruby version, which resulted in similar functionality and configuration formats and made the migration quick and easy.
  - htmltest works with the HTML generated by Hugo. Only the Markdown is available in the Web IDE, and the HTML cannot be generated there because Hugo cannot run in the browser. The errors from htmltest were mostly caused by flaky websites that we linked to and rarely by style violations, so we decided not to look for a JS alternative.
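As a minimal sketch, the ruler configuration in `.vscode/settings.json` could look like this (the line length of 80 is an assumed value, not necessarily ours):

```json
{
  // Visual ruler at the maximum allowed line length (80 is an assumed value).
  "editor.rulers": [80]
}
```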
The only thing left now is for the Web IDE to support extensions, and we'd be golden. This feature is tracked in &7685 and seems to be close to the finish line. Unfortunately, it has been in this state for quite a few months now.

Conclusions⌗
- From this I learned that I (as a developer) always overestimate the usability of what we build and the effort users are willing to put in. Documentation is necessary but not sufficient; many users start complaining before even reading it. The workflow therefore has to be extremely easy.
- Timelines can always change, and some are less reliable than others. Know which are which. If in doubt, only count on features that exist today.
Testing for dead links⌗
Another component that occasionally caused problems was htmltest. htmltest processes the HTML generated by Hugo. Among other things, it ensures that links are not dead by checking that linked pages return a 2xx HTTP status code. This is by far our most flaky check. The reasons include:
- Rate-limiting from websites that were called too often
- Random outages from websites
- Our faculty's page, which is unavailable every night

In the end we excluded the most unstable pages and those with strict rate-limiting from this test. These pages introduced too much instability into the test for it to be worth checking them.
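For illustration, such exclusions could be expressed in htmltest's `.htmltest.yml` roughly like this. The URL patterns are placeholders, not the actual pages we excluded:

```yaml
# .htmltest.yml — the patterns below are placeholders, not our real exclusions.
DirectoryPath: public
IgnoreURLs:
  - "flaky.example.org"        # site with random outages
  - "ratelimited.example.org"  # site with strict rate-limiting
```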
One solution would be for the test to fail only if a page has been unavailable for an extended period. However, a long delay until an error is raised makes it harder to fix, because by then it is unclear who is responsible. This leads to the trade-off underlying the problem.
We nudge users towards committing only “correct” content by failing the pipeline otherwise. This way, users who introduce errors are motivated to fix them.
The errors we detect with htmltest mostly have external causes: a user adds a link that is valid at the time. After a while the link may break through no fault of the user. Is the original user (who may be long gone by now) still responsible for fixing the link?
In our case I would say no. A better solution for our problem would be to check only newly introduced links in the usual pipeline. This ensures that links are valid when they are created. In addition, another pipeline would run regularly, check all links and notify a team of maintainers.
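A rough sketch of how the scheduled full check could look in GitLab CI (the job name is illustrative, and checking only newly introduced links in the push pipeline would additionally require some custom diffing):

```yaml
# Sketch: run the full external link check only from a scheduled pipeline.
check-all-links:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    - hugo
    - htmltest public
```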