1. Introduction and Goals

1.1. Create awesome docs!

docToolchain is an implementation of the docs-as-code approach for software architecture, plus some additional automation. The basis of docToolchain is the philosophy that software documentation should be treated in the same way as code, combined with the arc42 template for software architecture.

How it all began…​

1.1.1. docs-as-code

Before this project started, I wasn’t aware of the term docs-as-code. I just grew tired of keeping all my architecture diagrams up to date by copying them from my UML tool over to my word processor.

As a lazy developer, I told myself 'there has to be a better way of doing this'. So I started to automate the diagram export and switched from a full-fledged word processor to a markup renderer. This enabled me to reference the diagrams from within my text and update them just before I render the document.

1.1.2. arc42

Since my goal was to document software architectures, I was already using arc42 - a template for software architecture. At that time, I used the MS Word version of the template.

But what is arc42?

Dr. Gernot Starke and Peter Hruschka created this template in a joint effort to establish a standard for software architecture documents. They poured all their experience with software architectures not only into a structure but also into explanatory texts. These explanations are part of every chapter of the template and give you guidance on how to write that chapter of the document.

arc42 is available in many formats like MS Word, Textile and Confluence, and all of these formats are automatically generated from one golden master which is formatted in AsciiDoc.

1.1.3. docToolchain

In order to follow the docs-as-code approach, you need a build script which automates steps like exporting diagrams and rendering the markup used (AsciiDoc in the case of docToolchain) to the target format.

Unfortunately, such a build script is not easy to create in the first place ('how do I create .docx?', 'why does lib x not work with lib y?') and it is also not too easy to maintain.

docToolchain is the result of my journey through the docs-as-code land. The goal is an easy-to-use build script which only has to be configured, not modified, and which is maintained by a community as open source software.

The technical steps of my journey are written down in my blog: https://rdmueller.github.io.

Let’s start with what you’ll get when you use docToolchain…​

1.2. Benefits of the docs-as-code Approach

You want to write technical docs for your software project, so it is very likely that you already have the tools and processes to work with source code in place. Why not also use them for your docs?

1.2.1. Document Management System

By using a version control system like Git, you get a perfect document management system for free. It lets you version your docs and branch them, and it gives you an audit trail. You are even able to check who wrote which part of the docs. Isn’t that great?

Since your docs are now just plain text, it is also easy to do a diff and see exactly what has changed.

And when you store your docs in the same repository as your code, you always have both in sync!
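The version history and diff benefits described above can be tried out in a throwaway repository. A minimal sketch, assuming Git is installed (the path, file name and commit messages are purely illustrative):

```shell
# illustrative demo: a plain-text docs file gets versioning and diffs for free
rm -rf /tmp/docs-demo && mkdir -p /tmp/docs-demo && cd /tmp/docs-demo
git init -q
git config user.email docs@example.com && git config user.name docs
echo "== Context and Scope" > architecture.adoc
git add architecture.adoc && git commit -qm "add context chapter"
echo "== Context and Scope (reviewed)" > architecture.adoc
git commit -qam "rework context chapter"
git log --oneline -- architecture.adoc   # audit trail: who changed what, and when
git diff HEAD~1 -- architecture.adoc     # the exact textual change
```

Because the docs are plain text, the diff shows the changed sentences instead of an opaque binary blob.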

1.2.2. Collaboration and Review Process

Git as a distributed version control system even lets you collaborate on your docs. People can fork the docs and send you pull requests for the changes they made. By reviewing the pull request, you have a perfect review process out of the box - by accepting the pull request, you show that you’ve reviewed and accepted the changes. Most Git frontends like Bitbucket, GitLab and of course GitHub also allow you to reject pull requests with comments.

1.2.3. Image References and Code Snippets

Instead of pasting images into a binary document format, you can now reference them. This ensures that those images are up to date every time you rebuild your documents.

In addition, you can reference code snippets directly from your source code. This way, these snippets are also always up to date!
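In AsciiDoc, both kinds of references are one-liners. A sketch (the file names and the snippet tag are illustrative; the image name is taken from the build output shown later):

```asciidoc
// reference an image instead of pasting it
image::images/05_building_blocks-EN.png[Building Blocks]

// include a tagged snippet straight from the real source code,
// re-read on every build
[source,java]
----
include::../src/main/java/Demo.java[tags=interface]
----
```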

1.2.4. Compound and Stakeholder-Tailored Docs

Since you can reference not only images and code snippets but also sub-documents, you can split your docs into several sub-documents plus a master document which brings them all together. But you are not restricted to one master - you can create master docs for several different stakeholders, each containing only the chapters needed by them.
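Such a stakeholder-tailored master document is just another AsciiDoc file that includes the chapters it needs. A sketch (the chapter file names are illustrative):

```asciidoc
= Architecture for Operations
// only the chapters this stakeholder cares about
include::chapters/03_context.adoc[]
include::chapters/07_deployment_view.adoc[]
include::chapters/11_risks.adoc[]
```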

1.2.5. Many more Features…​

If you can dream it, you can script it.

  • Want to include a list of open issues from Jira? Check.

  • Want to include a changelog from Git? Check.

  • Want to use inline, text based diagrams? Check.

  • and many more…​

2. How to install docToolchain

2.1. Get the tool

To start with docToolchain you need to get a copy of the current docToolchain repository. The easiest way is to clone the repository without history and remove the .git folder:

Linux with git clone
git clone --recursive https://github.com/docToolchain/docToolchain.git <docToolchain home>
cd <docToolchain home>
rm -rf .git
rm -rf resources/asciidoctor-reveal.js/.git
rm -rf resources/reveal.js/.git

The --recursive option is required because the repository contains two submodules: resources/asciidoctor-reveal.js and resources/reveal.js.

Another way is to download the zipped git repository and rename it:

Linux with download as zip
wget https://github.com/docToolchain/docToolchain/archive/master.zip
unzip master.zip

# fetching dependencies

cd docToolchain-master/resources

rm -d reveal.js
wget https://github.com/hakimel/reveal.js/archive/tags/3.3.0.zip -O reveal.js.zip
unzip reveal.js.zip
mv reveal.js-tags-3.3.0 reveal.js

rm -d asciidoctor-reveal.js
wget https://github.com/asciidoctor/asciidoctor-reveal.js/archive/9667f5c.zip -O asciidoctor-reveal.js.zip
unzip asciidoctor-reveal.js.zip
mv asciidoctor-reveal.js-9667f5c5d926b3be48361d6d6413d3896954894c asciidoctor-reveal.js

mv docToolchain-master <docToolchain home>

If you work (like me) on a Windows environment, just download and unzip the repository as well as its dependencies: reveal.js and asciidoctor-reveal.js.

After unzipping, put the dependencies into the resources folder, so that the structure is the same as on GitHub.

You can add <docToolchain home>/bin to your PATH, or run doctoolchain with its full path if you prefer.
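On Linux, extending the PATH could look like this. The install location below is an assumption for illustration; use your actual <docToolchain home>:

```shell
# append docToolchain's bin folder to the PATH (the path is illustrative)
export PATH="$PATH:$HOME/docToolchain/bin"
# the launcher can now be called from anywhere, if the path above is correct
echo "$PATH"
```

Add the export line to your shell profile (e.g. ~/.bashrc) to make it permanent.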

2.2. Initialize directory for documents

The next step after getting docToolchain is to initialize a directory where your documents live. In docToolchain this directory is named "newDocDir" during initialization, or just "docDir" later on.

2.2.1. Existing documents

If you already have some existing documents in AsciiDoc format in your project, you need to put the configuration file there to tell docToolchain what to process and how. You can do that manually by copying the contents of the template_config directory, or by running the initExisting task.

Linux initExisting example
cd <docToolchain home>
./gradlew -b init.gradle initExisting -PnewDocDir=<your directory>

You then need to open the Config.groovy file and configure the names of your files properly. You may also change the PDF theme file to your taste.
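The central part of Config.groovy is the list of input files and the formats to generate for each of them. A sketch of what such a configuration might look like (the file names and paths are illustrative; the comments in the generated Config.groovy are the authoritative reference):

```groovy
// illustrative Config.groovy fragment
inputPath  = '.'
outputPath = 'build'

inputFiles = [
    // render this document as HTML and PDF
    [file: 'architecture.adoc', formats: ['html', 'pdf']],
]
```

The formats listed per file correspond to the generateX tasks described later: a file is only picked up by generateHTML if 'html' is in its formats list, and so on.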

2.2.2. Arc42 from scratch

If you don’t have existing documents yet, or if you need a fresh start, you can get the arc42 template in AsciiDoc format. You can do that by downloading it manually from http://arc42.org/download, or by running the initArc42<language> task. Currently supported languages are:

  • DE - German.

  • EN - English.

  • ES - Spanish.

Linux initArc42EN example
cd <docToolchain home>
./gradlew -b init.gradle initArc42EN -PnewDocDir=<newDocDir>

The Config.groovy file is then preconfigured to use the downloaded template.

2.3. Build

This should already be enough to start a first build.

Linux
doctoolchain <docDir> generateHTML
doctoolchain <docDir> generatePDF

Windows
doctoolchain.bat <docDir> generateHTML
doctoolchain.bat <docDir> generatePDF

<docDir> may be relative, e.g. ".", or absolute.

As a result, you will see the progress of your build together with some warnings which you can just ignore for the moment.

The first build generates some files within <docDir>/build:

|-- html5
|   |-- arc42-template.html
|   `-- images
|       |-- 05_building_blocks-EN.png
|       |-- 08-Crosscutting-Concepts-Structure-EN.png
|       `-- arc42-logo.png
`-- pdf
    |-- arc42-template.pdf
    `-- images
        |-- 05_building_blocks-EN.png
        |-- 08-Crosscutting-Concepts-Structure-EN.png
        `-- arc42-logo.png

Congratulations! If you see the same folder structure, you just managed to render the standard arc42 template as HTML and PDF!

If you didn’t get the right output, please raise an issue on GitHub.

2.4. Publish on Confluence

In addition to Config.groovy, there is also a scripts/ConfluenceConfig.groovy file. If you are not using Confluence, you can remove it. If you do use Confluence, you need to open this file and adapt it to your environment. You can also create multiple copies of that file. For example, you can have ConfluenceConfig.groovy for publishing official pages, and MyConfluenceConfig.groovy with a different Confluence space for reviews.
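The publish script reads its settings from that file. A sketch of the kind of values it contains (the keys appear in the script source below; the values here are illustrative and must be adapted to your Confluence instance):

```groovy
// illustrative ConfluenceConfig.groovy fragment
input = [
    [ file: "build/html5/arc42-template.html" ],
]
confluenceAPI            = 'https://confluence.example.com/rest/api/'
confluenceSpaceKey       = 'ARC42'
confluenceCreateSubpages = true
confluencePagePrefix     = 'arc42 - '
// Base64-encoded "username:password" for HTTP Basic auth
confluenceCredentials    = 'username:password'.bytes.encodeBase64().toString()
```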

The paths to those configuration files can be provided via the -P option of doctoolchain, for example:

# Uses scripts/ConfluenceConfig.groovy by default
doctoolchain <docDir> publishToConfluence --no-daemon -q

# Uses scripts/MyConfluenceConfig.groovy
doctoolchain <docDir> publishToConfluence -PconfluenceConfigFile=scripts/MyConfluenceConfig.groovy --no-daemon -q

3. Overview of available Tasks

This chapter explains all docToolchain specific tasks.

The following picture gives an overview of the whole build system:

Figure 1. docToolchain

3.1. Conventions

There are some simple naming conventions for the tasks. They might be confusing at first and that’s why they are explained here.

3.1.1. generateX

render would have been another good prefix, since these tasks use the plain asciidoctor functionality to render the source to a given format.

3.1.2. exportX

These tasks export images and AsciiDoc snippets from other systems or file formats. The resulting artefacts can then be included from your main sources.

What’s different from the generateX tasks is that you don’t need to run the export with each build.

It is also likely that you will have to put the resulting artefacts under version control, because the tools needed for the export (like Sparx Enterprise Architect or MS PowerPoint) are probably not available on a build server or on another contributor’s machine.

3.1.3. convertToX

These tasks take the output from asciidoctor and convert it (through other tools) to the target format. This results in a dependency on a generateX task and another external tool (currently pandoc).

3.1.4. publishToX

These tasks not only convert your documents but also deploy/publish/move them to a remote system — currently Confluence. This means that the result is likely to be visible immediately to others.

3.2. generateHTML


This is the standard asciidoctor generator which is supported out of the box.

The result is written to build/docs/html5. The HTML files need the images folder to be in the same directory to display correctly.

If you would like to have a single-file HTML as the result, you can configure Asciidoctor to store the images inline as data-uri.
Just set :data-uri: in the config of your AsciiDoc file.
But be warned - such a file can easily become very big, and some browsers might run into trouble rendering it.
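Setting the attribute in the document header is enough. A sketch (the document title is illustrative; :data-uri: is a standard Asciidoctor attribute):

```asciidoc
= My Architecture Documentation
:data-uri:

Images referenced below are now embedded into the HTML file itself.
```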

3.2.1. Text based Diagrams

docToolchain is configured to use the asciidoctor-diagram plugin, which can be used to create PlantUML diagrams.

The plugin also supports a bunch of other text-based diagram formats, but PlantUML is the most widely used.

To use it, just specify your plantUML code like this:

.example diagram
[plantuml, "{plantUMLDir}demoPlantUML", png] (1)
class BlockProcessor
class DiagramBlock
class DitaaBlock
class PlantUmlBlock

BlockProcessor <|-- DiagramBlock
DiagramBlock <|-- DitaaBlock
DiagramBlock <|-- PlantUmlBlock
(1) The first element of this list specifies the diagram tool plantuml to be used. The second element is the name of the image to be created, and the third specifies the image type.

The {plantUMLDir} prefix ensures that PlantUML also works for the generatePDF task. Without it, generateHTML works fine, but the PDF will not contain the generated images.

Make sure to specify a unique image name for each diagram, otherwise the diagrams will overwrite each other.

The above example renders as

example diagram
PlantUML needs Graphviz dot installed to work. If you can’t install it, you can use the Java-based version of the dot library: just add !pragma graphviz_dot jdot as the first line of your diagram definition. This is still an experimental feature, but it already works quite well!

3.2.2. Source

task generateHTML (
        type: AsciidoctorTask,
        group: 'docToolchain',
        description: 'use html5 as asciidoc backend') {

    attributes \
        'plantUMLDir'         : ''

    sources {
        sourceFiles.findAll {
            'html' in it.formats
        }.each {
            include it.file
        }
    }

    backends = ['html5']
}

3.3. generatePDF


This task makes use of the asciidoctor-pdf plugin to render your documents as a pretty PDF file.

The file will be written to build/docs/pdf.

The used plugin is still in alpha status, but the results are already quite good. If you want to create the PDF in another way, you can use phantomJS, for instance, and script it.

The PDF is generated directly from your AsciiDoc sources without the need of an intermediate format or other tools. The result looks more like a nicely rendered book than a print-to-pdf HTML page.

It is very likely that you will need to "theme" your PDF - change colors, fonts, page header and footer. This can be done by changing the src/docs/custom-theme.yml file. Documentation on how to modify it can be found in the asciidoctor-pdf theming guide.

3.3.1. Source

task generatePDF (
        type: AsciidoctorTask,
        group: 'docToolchain',
        description: 'use pdf as asciidoc backend') {

    attributes \
        'plantUMLDir'         : file("${docDir}/${config.outputPath}/images/plantUML/").path

    sources {
        sourceFiles.findAll {
            'pdf' in it.formats
        }.each {
            include it.file
        }
    }

    backends = ['pdf']
}

3.4. generateDocbook


This is only a helper task - it generates the intermediate format for convertToDocx and convertToEpub.

3.4.1. Source

task generateDocbook (
        type: AsciidoctorTask,
        group: 'docToolchain',
        description: 'use docbook as asciidoc backend') {

    sources {
        sourceFiles.findAll {
            'docbook' in it.formats
        }.each {
            include it.file
        }
    }

    backends = ['docbook']
}

3.5. generateDeck


This task makes use of the asciidoctor-reveal.js backend to render your documents into an HTML-based presentation.

This task is best used together with the exportPPT task. Create a PowerPoint presentation and enrich it with reveal.js slide definitions in AsciiDoc within the speaker notes.

3.5.1. Source

task generateDeck (
        type: AsciidoctorTask,
        group: 'docToolchain',
        description: 'use revealJs as asciidoc backend to create a presentation') {

    attributes \
        'plantUMLDir'         : '',
        'idprefix': 'slide-',
        'idseparator': '-',
        'docinfo1': '',
        'revealjs_theme': 'black',
        'revealjs_progress': 'true',
        'revealjs_touch': 'true',
        'revealjs_hideAddressBar': 'true',
        'revealjs_transition': 'linear',
        'revealjs_history': 'true',
        'revealjs_slideNumber': 'true'

    options template_dirs : [new File('resources/asciidoctor-reveal.js','templates/slim').absolutePath ]

    sources {
        sourceFiles.findAll {
            'revealjs' in it.formats
        }.each {
            include it.file
        }
    }

    outputDir = file(targetDir+'/decks/')

    resources {
        from('resources') {
            include 'reveal.js/**'
        }
        from(sourceDir) {
            include 'images/**'
        }
        logger.error "${docDir}/${config.outputPath}/ppt/images"
    }
}

3.6. publishToConfluence


3.6.1. Source

task publishToConfluence(
        description: 'publishes the HTML rendered output to confluence',
        group: 'docToolchain'
) {
    doLast {
        binding.setProperty('docDir', docDir)
        binding.setProperty('confluenceConfigFile', confluenceConfigFile)
        evaluate(new File('scripts/asciidoc2confluence.groovy'))
    }
}

/**
 * Created by Ralf D. Mueller and Alexander Heusingfeld
 * https://github.com/rdmueller/asciidoc2confluence
 * this script expects an HTML document created with AsciiDoctor
 * in the following style (default AsciiDoctor output)
 * <div class="sect1">
 *     <h2>Page Title</h2>
 *     <div class="sectionbody">
 *         <div class="sect2">
 *            <h3>Sub-Page Title</h3>
 *         </div>
 *         <div class="sect2">
 *            <h3>Sub-Page Title</h3>
 *         </div>
 *     </div>
 * </div>
 * <div class="sect1">
 *     <h2>Page Title</h2>
 *     ...
 * </div>
 */

// some dependencies
@Grab('org.codehaus.groovy.modules.http-builder:http-builder:0.6')
import org.jsoup.Jsoup
import org.jsoup.parser.Parser
import org.jsoup.nodes.Entities.EscapeMode
import org.jsoup.nodes.Document
import org.jsoup.nodes.Document.OutputSettings
import org.jsoup.nodes.Element
import org.jsoup.select.Elements
import groovyx.net.http.RESTClient
import groovyx.net.http.HttpResponseException
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.EncoderRegistry
import groovyx.net.http.ContentType
import java.security.MessageDigest
//to upload attachments:
import org.apache.http.entity.mime.MultipartEntity
import org.apache.http.entity.mime.content.StringBody
import org.apache.http.entity.mime.content.InputStreamBody
import org.apache.http.entity.mime.HttpMultipartMode
import groovyx.net.http.Method

def CDATA_PLACEHOLDER_START = '<cdata-placeholder>'
def CDATA_PLACEHOLDER_END = '</cdata-placeholder>'

def baseUrl

// configuration
def config
println "docDir: ${docDir}"
println "confluenceConfigFile: ${confluenceConfigFile}"
config = new ConfigSlurper().parse(new File(docDir, confluenceConfigFile).text)

def confluenceSpaceKey
def confluenceCreateSubpages
def confluencePagePrefix

// helper functions

def MD5(String s) {
    // hex-encoded MD5, used below to detect changed pages and attachments
    MessageDigest.getInstance("MD5").digest(s.bytes).encodeHex().toString()
}

// for getting better error message from the REST-API
void trythis (Closure action) {
    try {
        action.run()
    } catch (HttpResponseException error) {
        println "something went wrong - got an http response code "+error.response.status+":"
        println error.response.data
        throw error
    }
}

def parseAdmonitionBlock(block, String type) {
    content = block.select(".content").first()
    titleElement = content.select(".title")
    titleText = ''
    if(titleElement != null) {
        titleText = "<ac:parameter ac:name=\"title\">${titleElement.text()}</ac:parameter>"
    }
    block.after("<ac:structured-macro ac:name=\"${type}\">${titleText}<ac:rich-text-body>${content}</ac:rich-text-body></ac:structured-macro>")
}

def uploadAttachment = { def pageId, String url, String fileName, String note ->
    def is
    def localHash
    if (url.startsWith('http')) {
        is = new URL(url).openStream()
        //build a hash of the attachment
        localHash = MD5(new URL(url).openStream().text)
    } else {
        is = new File(url).newDataInputStream()
        //build a hash of the attachment
        localHash = MD5(new File(url).newDataInputStream().text)
    }

    def api = new RESTClient(config.confluenceAPI)
    //this fixes the encoding
    api.encoderRegistry = new EncoderRegistry( charset: 'utf-8' )

    def headers = [
            'Authorization': 'Basic ' + config.confluenceCredentials,
    ]
    //check if attachment already exists
    def result = "nothing"
    def attachment = api.get(path: 'content/' + pageId + '/child/attachment',
            query: [
                    'filename': fileName,
            ], headers: headers).data
    def http
    if (attachment.size==1) {
        // attachment exists. need an update?
        def remoteHash = attachment.results[0].extensions.comment.replaceAll("(?sm).*#([^#]+)#.*",'$1')
        if (remoteHash!=localHash) {
            //hash is different -> attachment needs to be updated
            http = new HTTPBuilder(config.confluenceAPI + 'content/' + pageId + '/child/attachment/' + attachment.results[0].id + '/data')
            println "    updated attachment"
        }
    } else {
        http = new HTTPBuilder(config.confluenceAPI + 'content/' + pageId + '/child/attachment')
    }
    if (http) {
        http.request(Method.POST) { req ->
            requestContentType: "multipart/form-data"
            MultipartEntity multiPartContent = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE)
            // Adding Multi-part file parameter "file"
            multiPartContent.addPart("file", new InputStreamBody(is, fileName))
            // Adding another string parameter "comment"
            multiPartContent.addPart("comment", new StringBody(note + "\r\n#" + localHash + "#"))
            headers.each { key, value ->
                req.addHeader(key, value)
            }
            // set the multipart payload on the request
            req.entity = multiPartContent
        }
    }
}

def realTitle = { pageTitle ->
    confluencePagePrefix + pageTitle
}

def rewriteDescriptionLists = { body ->
    def TAGS = [ dt: 'th', dd: 'td' ]
    body.select('dl').each { dl ->
        // WHATWG allows wrapping dt/dd in divs, simply unwrap them
        dl.select('div').each { it.unwrap() }

        // group dts and dds that belong together, usually it will be a 1:1 relation
        // but HTML allows for different constellations
        def rows = []
        def current = [dt: [], dd: []]
        rows << current
        dl.select('dt, dd').each { child ->
            def tagName = child.tagName()
            if (tagName == 'dt' && current.dd.size() > 0) {
                // dt follows dd, start a new group
                current = [dt: [], dd: []]
                rows << current
            }
            current[tagName] << child.tagName(TAGS[tagName])
        }

        rows.each { row ->
            def sizes = [dt: row.dt.size(), dd: row.dd.size()]
            def rowspanIdx = [dt: -1, dd: sizes.dd - 1]
            def rowspan = Math.abs(sizes.dt - sizes.dd) + 1
            def max = sizes.dt
            if (sizes.dt < sizes.dd) {
                max = sizes.dd
                rowspanIdx = [dt: sizes.dt - 1, dd: -1]
            }
            (0..<max).each { idx ->
                def tr = dl.appendElement('tr')
                ['dt', 'dd'].each { type ->
                    if (sizes[type] > idx) {
                        if (idx == rowspanIdx[type] && rowspan > 1) {
                            row[type][idx].attr('rowspan', "${rowspan}")
                        }
                        tr.appendChild(row[type][idx])
                    } else if (idx == 0) {
                        tr.appendElement(TAGS[type]).attr('rowspan', "${rowspan}")
                    }
                }
            }
        }
        // definition lists are not displayed by Confluence, render them as a table
        dl.tagName('table')
    }
}

def rewriteInternalLinks = { body, anchors, pageAnchors ->
    // find internal cross-references and replace them with link macros
    body.select('a[href]').each { a ->
        def href = a.attr('href')
        if (href.startsWith('#')) {
            def anchor = href.substring(1)
            def pageTitle = anchors[anchor] ?: pageAnchors[anchor]
            if (pageTitle) {
                // as Confluence insists on link texts to be contained
                // inside CDATA, we have to strip all HTML and
                // potentially lose styling that way.
                a.html(CDATA_PLACEHOLDER_START + a.text() + CDATA_PLACEHOLDER_END)
                a.wrap("<ac:link${anchors.containsKey(anchor) ? ' ac:anchor="' + anchor + '"' : ''}></ac:link>")
                   .before("<ri:page ri:content-title=\"${realTitle pageTitle}\"/>")
            }
        }
    }
}

def rewriteCodeblocks = { body ->
    body.select('pre > code').each { code ->
        if (code.attr('data-lang')) {
            // strip the syntax-highlighting spans, Confluence highlights on its own
            code.select('span[class]').each { span ->
                span.unwrap()
            }
            code.before("<ac:parameter ac:name=\"language\">${code.attr('data-lang')}</ac:parameter>")
        }
        code.parent() // pre now
            .wrap('<ac:structured-macro ac:name="code"></ac:structured-macro>')
    }
}

def unescapeCDATASections = { html ->
    def start = html.indexOf(CDATA_PLACEHOLDER_START)
    while (start > -1) {
        def end = html.indexOf(CDATA_PLACEHOLDER_END, start)
        if (end > -1) {
            def prefix = html.substring(0, start) + CDATA_PLACEHOLDER_START
            def suffix = html.substring(end)
            def unescaped = html.substring(start + CDATA_PLACEHOLDER_START.length(), end)
                    .replaceAll('&lt;', '<').replaceAll('&gt;', '>')
            html = prefix + unescaped + suffix
        }
        start = html.indexOf(CDATA_PLACEHOLDER_START, start + 1)
    }
    html
}

//modify local page in order to match the internal confluence storage representation a bit better
//definition lists are not displayed by confluence, so turn them into tables
//body can be of type Element or Elements
def deferredUpload = []
def parseBody =  { body, anchors, pageAnchors ->
    [   'note':'info',
        'tip':'tip'            ].each { adType, cType ->
        body.select('.admonitionblock.'+adType).each { block ->
            parseAdmonitionBlock(block, cType)
        }
    }
    //special for the arc42-template: make the help blocks collapsible
    body.select('.arc42help .content')
            .wrap('<ac:structured-macro ac:name="expand"></ac:structured-macro>')
            .wrap('<ac:structured-macro ac:name="info"></ac:structured-macro>')
            .before('<ac:parameter ac:name="title">arc42</ac:parameter>')
    body.select('div.title').wrap("<strong></strong>").before("<br />").wrap("<div></div>")
    // see if we can find referenced images and fetch them
    new File("tmp/images/.").mkdirs()
    // find images, extract their URLs for later uploading (after we know the pageId) and replace them with this macro:
    // <ac:image ac:align="center" ac:width="500">
    // <ri:attachment ri:filename="deployment-context.png"/>
    // </ac:image>
    body.select('img').each { img ->
        img.attributes().each { attribute ->
            //println attribute.dump()
        }
        def src = img.attr('src')
        def imgWidth = img.attr('width')?:500
        def imgAlign = img.attr('align')?:"center"
        println "    image: "+src

        //it is not an online image, so upload it to confluence and use the ri:attachment tag
        if(!src.startsWith("http")) {
          def newUrl = baseUrl.toString().replaceAll('\\\\','/').replaceAll('/[^/]*$','/')+src
          def fileName = (src.tokenize('/')[-1])

          trythis {
              deferredUpload <<  [0,newUrl,fileName,"automatically uploaded"]
          }
          img.after("<ac:image ac:align=\"${imgAlign}\" ac:width=\"${imgWidth}\"><ri:attachment ri:filename=\"${fileName}\"/></ac:image>")
        }
        // it is an online image, so we have to use the ri:url tag
        else {
          img.after("<ac:image ac:align=\"${imgAlign}\" ac:width=\"${imgWidth}\"><ri:url ri:value=\"${src}\"/></ac:image>")
        }
        img.remove()
    }
    rewriteDescriptionLists body
    rewriteInternalLinks body, anchors, pageAnchors
    //sanitize code inside code tags
    rewriteCodeblocks body
    def pageString = unescapeCDATASections body.html().trim()

    //change some html elements through simple substitutions
    pageString = pageString
            .replaceAll('<br>','<br />')
            .replaceAll('</br>','<br />')

    return pageString
}

// the create-or-update functionality for confluence pages
def pushToConfluence = { pageTitle, pageBody, parentId, anchors, pageAnchors ->
    def api = new RESTClient(config.confluenceAPI)
    def headers = [
            'Authorization': 'Basic ' + config.confluenceCredentials,
            'Content-Type':'application/json; charset=utf-8'
    ]
    //this fixes the encoding
    api.encoderRegistry = new EncoderRegistry( charset: 'utf-8' )
    //try to get an existing page
    localPage = parseBody(pageBody, anchors, pageAnchors)

    def localHash = MD5(localPage)
    def prefix = '<p><ac:structured-macro ac:name="toc"/></p>'+(config.extraPageContent?:'')
    localPage  = prefix+localPage
    localPage += '<p><ac:structured-macro ac:name="children"><ac:parameter ac:name="sort">creation</ac:parameter></ac:structured-macro></p>'
    localPage += '<p style="display:none">hash: #'+localHash+'#</p>'

    def request = [
            type : 'page',
            title: realTitle(pageTitle),
            space: [
                    key: confluenceSpaceKey
            ],
            body : [
                    storage: [
                            value         : localPage,
                            representation: 'storage'
                    ]
            ]
    ]
    if (parentId) {
        request.ancestors = [
                [ type: 'page', id: parentId]
        ]
    }

    def pages
    trythis {
        def cql = "space='${confluenceSpaceKey}' AND type=page AND title~'" + realTitle(pageTitle) + "'"
        if (parentId) {
            cql += " AND parent=${parentId}"
        }
        pages = api.get(path: 'content/search',
                        query: ['cql' : cql,
                                'expand'  : 'body.storage,version'
                               ], headers: headers).data.results
    }

    def page = pages.find { p -> p.title.equalsIgnoreCase(realTitle(pageTitle)) }

    if (page) {
        //println "found existing page: " + page.id +" version "+page.version.number

        //extract hash from remote page to see if it is different from local one

        def remotePage = page.body.storage.value.toString().trim()

        def remoteHash = remotePage =~ /(?ms)hash: #([^#]+)#/
        remoteHash = remoteHash.size()==0?"":remoteHash[0][1]

        if (remoteHash == localHash) {
            //println "page hasn't changed!"
            deferredUpload.each {
                uploadAttachment(page?.id, it[1], it[2], it[3])
            }
            deferredUpload = []
            return page.id
        } else {
            trythis {
                // update page
                // https://developer.atlassian.com/display/CONFDEV/Confluence+REST+API+Examples#ConfluenceRESTAPIExamples-Updatingapage
                request.id      = page.id
                request.version = [number: (page.version.number as Integer) + 1]
                def res = api.put(contentType: ContentType.JSON,
                                  requestContentType : ContentType.JSON,
                                  path: 'content/' + page.id, body: request, headers: headers)
            }
            println "> updated page"+page.id
            deferredUpload.each {
                uploadAttachment(page.id, it[1], it[2], it[3])
            }
            deferredUpload = []
            return page.id
        }
    } else {
        if (parentId) {
            def foreignPages
            trythis {
                def foreignCql = "space='${confluenceSpaceKey}' AND type=page AND title~'" + realTitle(pageTitle) + "'"
                foreignPages = api.get(path: 'content/search',
                                      query: ['cql' : foreignCql],
                                    headers: headers).data.results
            }

            def foreignPage = foreignPages.find { p -> p.title.equalsIgnoreCase(realTitle(pageTitle)) }

            if (foreignPage) {
                throw new IllegalArgumentException("Cannot create page, page with the same "
                    + "title=${foreignPage.title} "
                    + "and id=${foreignPage.id} already exists in the space")
            }
        }

        //create a page
        trythis {
            page = api.post(contentType: ContentType.JSON,
                            requestContentType: ContentType.JSON,
                            path: 'content', body: request, headers: headers)
        }
        println "> created page "+page?.data?.id
        deferredUpload.each {
            uploadAttachment(page?.data?.id, it[1], it[2], it[3])
        }
        deferredUpload = []
        return page?.data?.id
    }
}

def parseAnchors = { page ->
    def anchors = [:]
    page.body.select('[id]').each { anchor ->
        def name = anchor.attr('id')
        anchors[name] = page.title
        anchor.before("<ac:structured-macro ac:name=\"anchor\"><ac:parameter ac:name=\"\">${name}</ac:parameter></ac:structured-macro>")
    }
    anchors
}

def pushPages
pushPages = { pages, anchors, pageAnchors ->
    pages.each { page ->
        println page.title
        def id = pushToConfluence page.title, page.body, page.parent, anchors, pageAnchors
        page.children*.parent = id
        pushPages page.children, anchors, pageAnchors
    }
}

def recordPageAnchor = { head ->
    def a = [:]
    if (head.attr('id')) {
        a[head.attr('id')] = head.text()
    }
    a
}

def promoteHeaders = { tree, start, offset ->
    (start..7).each { i ->
        tree.select("h${i}").tagName("h${i-offset}").before('<br />')
    }
}

config.input.each { input ->

    input.file = "${docDir}/${input.file}"

    if (input.file ==~ /.*[.](ad|adoc|asciidoc)$/) {
        println "convert ${input.file}"
        "groovy asciidoc2html.groovy ${input.file}".execute()
        input.file = input.file.replaceAll(/[.](ad|adoc|asciidoc)$/, '.html')
        println "to ${input.file}"
    }
    confluenceSpaceKey = input.spaceKey ?: config.confluenceSpaceKey
    confluenceCreateSubpages = (input.createSubpages != null) ? input.createSubpages : config.confluenceCreateSubpages
    confluencePagePrefix = input.pagePrefix ?: config.confluencePagePrefix

    def html = input.file ? new File(input.file).getText('utf-8') : new URL(input.url).getText()
    baseUrl  = input.file ? new File(input.file) : new URL(input.url)
    Document dom = Jsoup.parse(html, 'utf-8', Parser.xmlParser())
    dom.outputSettings().prettyPrint(false);//makes html() preserve linebreaks and spacing
    dom.outputSettings().escapeMode(org.jsoup.nodes.Entities.EscapeMode.xhtml); //This will ensure xhtml validity regarding entities
    dom.outputSettings().charset("UTF-8"); //does no harm :-)
    def masterid = input.ancestorId

    // if confluenceAncestorId is not set, create a new parent page
    def parentId = !input.ancestorId ? null : input.ancestorId
    def anchors = [:]
    def pageAnchors = [:]
    def sections = pages = []

    // let's try to select the "first page" and push it to confluence
    dom.select('div#preamble div.sectionbody').each { pageBody ->
        def preamble = [
            title: input.preambleTitle ?: "arc42",
            body: pageBody,
            children: [],
            parent: parentId
        ]
        pages << preamble
        sections = preamble.children
        parentId = null
    }
    // <div class="sect1"> are the main headings
    // let's extract these
    dom.select('div.sect1').each { sect1 ->
        Elements pageBody = sect1.select('div.sectionbody')
        def currentPage = [
            title: sect1.select('h2').text(),
            body: pageBody,
            children: [],
            parent: parentId
        ]

        if (confluenceCreateSubpages) {
            pageBody.select('div.sect2').each { sect2 ->
                def title = sect2.select('h3').text()
                def body = sect2
                def subPage = [
                    title: title,
                    body: body
                ]
                currentPage.children << subPage
                promoteHeaders sect2, 4, 3
            }
        } else {
            promoteHeaders sect1, 3, 2
        }
        sections << currentPage
    }

    pushPages pages, anchors, pageAnchors
}

3.7. convertToDocx


3.7.1. Source

task convertToDocx (
        group: 'docToolchain',
        type: Exec
) {
    // For now it's only taking the first input file that has docbook format specified
    def sourceFile = sourceFiles.find { 'docbook' in it.formats }.file.replace('.adoc', '.xml')
    def targetFile = sourceFile.replace('.xml', '.docx')

    workingDir "$targetDir/docbook"
    executable = "pandoc"
    new File("$targetDir/docx/").mkdirs()
    args = ['-r','docbook',

3.8. convertToEpub


Dependency: [generateDocBook]

This task uses pandoc to convert the DocBook output from AsciiDoctor to ePub, so you can conveniently read your documentation on an eBook reader.

The result can be found in build/docs/epub.
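Conceptually, the task wraps a plain pandoc call along these lines (a sketch only; arc42.xml stands in for whatever DocBook file the generateDocBook task produced, and the real task derives all paths from sourceFiles and targetDir):

```shell
# run from the folder containing the DocBook output
cd build/docs/docbook
# read DocBook, write ePub into the epub target folder
pandoc -r docbook -t epub -o ../epub/arc42.epub arc42.xml
```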

3.8.1. Source

task convertToEpub (
        group: 'docToolchain',
        type: Exec
) {
    // For now it's only taking the first input file that has docbook format specified
    def sourceFile = sourceFiles.find { 'docbook' in it.formats }.file.replace('.adoc', '.xml')
    def targetFile = sourceFile.replace('.xml', '.epub')

    workingDir "$targetDir/docbook"
    executable = "pandoc"
    new File("$targetDir/epub/").mkdirs()
    args = ['-r','docbook',

3.9. exportEA


3.9.1. Source

task exportEA(
        dependsOn: [streamingExecute],
        description: 'exports all diagrams and some texts from EA files',
        group: 'docToolchain'
) {
    doLast {
        //make sure path for notes exists
        //and remove old notes
        new File('src/docs/ea').deleteDir()
        //also remove old diagrams
        new File('src/docs/images/ea').deleteDir()
        //create a readme to clarify things
        def readme = """This folder contains exported diagrams or notes from Enterprise Architect.

Please note that these are generated files but reside in the `src`-folder in order to be versioned.

This is to make sure that they can be used from environments other than windows.

# Warning!

**The contents of this folder        will be overwritten with each re-export!**

use `gradle exportEA` to re-export files
"""
        new File('src/docs/images/ea/.').mkdirs()
        new File('src/docs/images/ea/readme.ad').write(readme)
        new File('src/docs/ea/.').mkdirs()
        new File('src/docs/ea/readme.ad').write(readme)
        //execute through cscript in order to make sure that we get WScript.echo right
        "%SystemRoot%\\System32\\cscript.exe //nologo scripts/exportEAP.vbs".executeCmd()
        //the VB Script is only capable of writing iso-8859-1-Files.
        //we now have to convert them to UTF-8
        new File('src/docs/ea/.').eachFileRecurse { file ->
            if (file.isFile()) {
                println "exported notes " + file.canonicalPath
                file.write(file.getText('iso-8859-1'), 'utf-8')
            }
        }
    }
}
    ' based on the "Project Interface Example" which comes with EA
    ' http://stackoverflow.com/questions/1441479/automated-method-to-export-enterprise-architect-diagrams

    Dim EAapp 'As EA.App
    Dim Repository 'As EA.Repository
    Dim FS 'As Scripting.FileSystemObject

    Dim projectInterface 'As EA.Project

    Const   ForAppending = 8

    ' Helper
    ' http://windowsitpro.com/windows/jsi-tip-10441-how-can-vbscript-create-multiple-folders-path-mkdir-command
    Function MakeDir (strPath)
      Dim strParentPath, objFSO
      Set objFSO = CreateObject("Scripting.FileSystemObject")
      On Error Resume Next
      strParentPath = objFSO.GetParentFolderName(strPath)

      If Not objFSO.FolderExists(strParentPath) Then MakeDir strParentPath
      If Not objFSO.FolderExists(strPath) Then objFSO.CreateFolder strPath
      On Error Goto 0
      MakeDir = objFSO.FolderExists(strPath)

    End Function

    Sub WriteNote(currentModel, currentElement, notes, prefix)
        If (Left(notes, 6) = "{adoc:") Then
            strFileName = Mid(notes,7,InStr(notes,"}")-7)
            strNotes = Right(notes,Len(notes)-InStr(notes,"}"))
            set objFSO = CreateObject("Scripting.FileSystemObject")
            If (currentModel.Name="Model") Then
              ' When we work with the default model, we don't need a sub directory
              path = "./src/docs/ea/"
            Else
              path = "./src/docs/ea/"&currentModel.Name&"/"
            End If
            ' WScript.echo path&strFileName
            post = ""
            If (prefix<>"") Then
                post = "_"
            End If
            set objFile = objFSO.OpenTextFile(path&prefix&post&strFileName&".ad",ForAppending, True)
            name = currentElement.Name
            name = Replace(name,vbCr,"")
            name = Replace(name,vbLf,"")
            ' WScript.echo "-"&Left(strNotes, 6)&"-"
            if (Left(strNotes, 3) = vbCRLF&"|") Then
                ' content should be rendered as table - so don't interfere with it
                'let's add the name of the object
            End If
            if (prefix<>"") Then
                ' write the same to a second file
                set objFile = objFSO.OpenTextFile(path&prefix&".ad",ForAppending, True)
            End If
        End If
    End Sub

    Sub SyncJira(currentModel, currentDiagram)
        notes = currentDiagram.notes
        set currentPackage = Repository.GetPackageByID(currentDiagram.PackageID)
        updated = 0
        created = 0
        If (Left(notes, 6) = "{jira:") Then
            WScript.echo " >>>> Diagram jira tag found"
            strSearch = Mid(notes,7,InStr(notes,"}")-7)
            Set objShell = CreateObject("WScript.Shell")
            'objShell.CurrentDirectory = fso.GetFolder("./scripts")
            Set objExecObject = objShell.Exec ("cmd /K  groovy ./scripts/exportJira.groovy """ & strSearch &""" & exit")
            strReturn = ""
            x = 0
            y = 0
            Do While Not objExecObject.StdOut.AtEndOfStream
                output = objExecObject.StdOut.ReadLine()
                ' WScript.echo output
                jiraElement = Split(output,"|")
                name = jiraElement(0)&":"&vbCR&vbLF&jiraElement(4)
                On Error Resume Next
                Set requirement = currentPackage.Elements.GetByName(name)
                On Error Goto 0
                if (IsObject(requirement)) then
                    ' element already exists
                    requirement.notes = ""
                    requirement.notes = requirement.notes&"<a href='"&jiraElement(5)&"'>"&jiraElement(0)&"</a>"&vbCR&vbLF
                    requirement.notes = requirement.notes&"Priority: "&jiraElement(1)&vbCR&vbLF
                    requirement.notes = requirement.notes&"Created: "&jiraElement(2)&vbCR&vbLF
                    requirement.notes = requirement.notes&"Assignee: "&jiraElement(3)&vbCR&vbLF
                    updated = updated + 1
                else
                    Set requirement = currentPackage.Elements.AddNew(name,"Requirement")
                    requirement.notes = ""
                    requirement.notes = requirement.notes&"<a href='"&jiraElement(5)&"'>"&jiraElement(0)&"</a>"&vbCR&vbLF
                    requirement.notes = requirement.notes&"Priority: "&jiraElement(1)&vbCR&vbLF
                    requirement.notes = requirement.notes&"Created: "&jiraElement(2)&vbCR&vbLF
                    requirement.notes = requirement.notes&"Assignee: "&jiraElement(3)&vbCR&vbLF
                    Set dia_obj = currentDiagram.DiagramObjects.AddNew("l="&(10+x*200)&";t="&(10+y*50)&";b="&(10+y*50+44)&";r="&(10+x*200+180),"")
                    x = x + 1
                    if (x>3) then
                      x = 0
                      y = y + 1
                    end if
                    dia_obj.ElementID = requirement.ElementID
                    created = created + 1
                end if
            Loop
            Set objShell = Nothing
            WScript.echo "created "&created&" requirements"
            WScript.echo "updated "&updated&" requirements"
        End If
    End Sub

    Sub SaveDiagram(currentModel, currentDiagram)
                ' Open the diagram

            ' Save and close the diagram
            If (currentModel.Name="Model") Then
                ' When we work with the default model, we don't need a sub directory
                path = "/src/docs/images/ea/"
            Else
                path = "/src/docs/images/ea/" & currentModel.Name & "/"
            End If
            diagramName = Replace(currentDiagram.Name," ","_")
            diagramName = Replace(diagramName,vbCr,"")
            diagramName = Replace(diagramName,vbLf,"")
            filename = path & diagramName & ".png"
            MakeDir("." & path)
            ' projectInterface.putDiagramImageToFile currentDiagram.DiagramID,fso.GetAbsolutePathName(".")&filename,1
            WScript.echo " extracted image to ." & filename
            For Each diagramElement In currentDiagram.DiagramObjects
                Set currentElement = Repository.GetElementByID(diagramElement.ElementID)
                WriteNote currentModel, currentElement, currentElement.Notes, diagramName&"_notes"
            Next
            For Each diagramLink In currentDiagram.DiagramLinks
                set currentConnector = Repository.GetConnectorByID(diagramLink.ConnectorID)
                WriteNote currentModel, currentConnector, currentConnector.Notes, diagramName&"_links"
            Next
    End Sub
    ' Recursively saves all diagrams under the provided package and its children
    Sub DumpDiagrams(thePackage,currentModel)

        Set currentPackage = thePackage

        ' export element notes
        For Each currentElement In currentPackage.Elements
            WriteNote currentModel, currentElement, currentElement.Notes, ""
            ' export connector notes
            For Each currentConnector In currentElement.Connectors
                ' WScript.echo currentConnector.ConnectorGUID
                if (currentConnector.ClientID=currentElement.ElementID) Then
                    WriteNote currentModel, currentConnector, currentConnector.Notes, ""
                End If
            Next
            if (Not currentElement.CompositeDiagram Is Nothing) Then
                SyncJira currentModel, currentElement.CompositeDiagram
                SaveDiagram currentModel, currentElement.CompositeDiagram
            End If
            if (Not currentElement.Elements Is Nothing) Then
                DumpDiagrams currentElement,currentModel
            End If
        Next

        ' Iterate through all diagrams in the current package
        For Each currentDiagram In currentPackage.Diagrams
            SyncJira currentModel, currentDiagram
            SaveDiagram currentModel, currentDiagram
        Next

        ' Process child packages
        Dim childPackage 'as EA.Package
        ' otPackage = 5
        if (currentPackage.ObjectType = 5) Then
            For Each childPackage In currentPackage.Packages
                call DumpDiagrams(childPackage, currentModel)
            Next
        End If
    End Sub

		Function SearchEAProjects(path)

		  For Each folder In path.SubFolders
		    SearchEAProjects folder
		  Next

		  For Each file In path.Files
				If fso.GetExtensionName (file.Path) = "eap" Then
					WScript.echo "found "&file.path
				End If
		  Next

    End Function

    Sub OpenProject(file)
      ' open Enterprise Architect
      Set EAapp = CreateObject("EA.App")
      WScript.echo "opening Enterprise Architect. This might take a moment..."
      ' load project
      ' make Enterprise Architect to not appear on screen
      EAapp.Visible = False

      ' get repository object
      Set Repository = EAapp.Repository
      ' Show the script output window
      ' Repository.EnsureOutputVisible("Script")

      Set projectInterface = Repository.GetProjectInterface()

      ' Iterate through all model nodes
      Dim currentModel 'As EA.Package
      For Each currentModel In Repository.Models
        ' Iterate through all child packages and save out their diagrams
        Dim childPackage 'As EA.Package
        For Each childPackage In currentModel.Packages
          call DumpDiagrams(childPackage,currentModel)
        Next
      Next
    End Sub

  set fso = CreateObject("Scripting.fileSystemObject")
  WScript.echo "Image extractor"
  WScript.echo "looking for .eap files in " & fso.GetAbsolutePathName(".") & "/src"
  'Dim f As Scripting.Files
  SearchEAProjects fso.GetFolder("./src")
  WScript.echo "finished exporting images"

3.10. exportVisio


This task searches for Visio files in the /src/docs folder. It then exports all diagrams and element notes to /src/docs/images/visio and /src/docs/visio.

  • Images are stored as /images/visio/[filename]-[pagename].png

  • Notes are stored as /visio/[filename]-[pagename].adoc

You can specify the file to which the notes of a diagram are exported by starting a comment with {adoc:[filename].adoc}. The notes will then be written to /visio/[filename].adoc.

Currently, only Visio files stored directly in /src/docs are supported. For all others, the exported files will end up in the wrong location.
Please close any running Visio instance before starting this task.
Todos: issue #112

3.10.1. Source

task exportVisio(
        dependsOn: [streamingExecute],
        description: 'exports all diagrams and notes from visio files',
        group: 'docToolchain'
) {
    doLast {
        //make sure path for notes exists
        //and remove old notes
        new File('src/docs/visio').deleteDir()
        //also remove old diagrams
        new File('src/docs/images/visio').deleteDir()
        //create a readme to clarify things
        def readme = """This folder contains exported diagrams and notes from visio files.

Please note that these are generated files but reside in the `src`-folder in order to be versioned.

This is to make sure that they can be used from environments other than windows.

# Warning!

**The contents of this folder        will be overwritten with each re-export!**

use `gradle exportVisio` to re-export files
"""
        new File('src/docs/images/visio/.').mkdirs()
        new File('src/docs/images/visio/readme.ad').write(readme)
        new File('src/docs/visio/.').mkdirs()
        new File('src/docs/visio/readme.ad').write(readme)
        def sourcePath = new File('src/docs/.').canonicalPath
        def scriptPath = new File('scripts/VisioPageToPngConverter.ps1').canonicalPath
        "powershell ${scriptPath} -SourcePath ${sourcePath}".executeCmd()
    }
}
# Convert all pages in all Visio files in the given directory to png files.
# A Visio window might flash briefly.
# The converted png files are stored in the same directory.
# The name of the png file is concatenated from the Visio file name and the page name.
# In addition, all comments are stored in adoc files.
# If the Visio file is named "MyVisio.vsdx" and the page is called "FirstPage",
# the name of the png file will be "MyVisio-FirstPage.png" and the comments will
# be stored in "MyVisio-FirstPage.adoc".
# There is an alternative for the name of the adoc file: it can be given in the
# first line of the comment. If given, it has to be enclosed in curly brackets
# with the prefix "adoc:", e.g. {adoc:MyCommentFile.adoc}
# Prerequisites: Visio and PowerShell have to be installed on the computer.
# Parameter: SourcePath - directory where the Visio files can be found
# Example: powershell VisioPageToPngConverter.ps1 -SourcePath c:\convertertest\


If (!(Test-Path -Path $SourcePath)) {
    Write-Warning "The path ""$SourcePath"" does not exist or is not accessible, please input the correct path."
}

# Extend the source path to match only Visio files in the given directory and not in subdirectories
If ($SourcePath.EndsWith("\")) {
    $SourcePath = "$SourcePath*"
} Else {
    $SourcePath = "$SourcePath\*"
}

$VisioFiles = Get-ChildItem -Path $SourcePath -Include *.vsdx,*.vssx,*.vstx,*.vxdm,*.vssm,*.vstm,*.vsd,*.vdw,*.vss,*.vst

If (!$VisioFiles) {
    Write-Warning "There are no Visio files in the path ""$SourcePath""."
}

$VisioApp = New-Object -ComObject Visio.Application
$VisioApp.Visible = $false

# Extract the png from all the files in the folder
Foreach($File in $VisioFiles)
    $FilePath = $File.FullName
    $FileDirectory = $File.DirectoryName   # folder containing the Visio file; used to store the png and adoc files
    $FileBaseName = $File.BaseName         # file name used as part of the names of the png and adoc files

        $Document = $VisioApp.Documents.Open($FilePath)
        $Pages = $VisioApp.ActiveDocument.Pages
        Foreach($Page in $Pages)
            # Create valid filenames for the png and adoc files
            $PngFileName = $Page.Name -replace '[:/\\*?|<>]','-'
            $PngFileName = "$FileBaseName-$PngFileName.png"
            $AdocFileName = $PngFileName.Replace(".png", ".adoc")

            #TODO: this needs better logic

            $AllPageComments = ""
            ForEach($PageComment in $Page.Comments) {
                # Extract adoc filename from comment text if the syntax is valid
                # Remove the filename from the text and save the comment in a file with a valid name
                $EofStringIndex = $PageComment.Text.IndexOf(".adoc}")
                if ($PageComment.Text.StartsWith("{adoc") -And ($EofStringIndex -gt 6)) {
                    $AdocFileName = $PageComment.Text.Substring(6, $EofStringIndex - 1)
                    $AllPageComments += $PageComment.Text.Substring($EofStringIndex + 6)
                } else {
                    $AllPageComments += $PageComment.Text + "`n"
                }
            }
            If ($AllPageComments)

                $AdocFileName = $AdocFileName -replace '[:/\\*?|<>]','-'
                #TODO: this needs better logic
                $stream = [System.IO.StreamWriter] "$FileDirectory\visio\$AdocFileName"
        if ($Document)
        Write-Warning "One or more visio page(s) in file ""$FilePath"" have been lost in this converting."
        Write-Warning "Error was: $_"

3.11. exportChangeLog


As the name says, this task exports the changelog to be referenced from within your documentation - if needed. The changelog is written to build/docs/changelog.adoc.

This task can be configured to use a different source control system or a different directory. To configure the task, copy template_config/scripts/ChangelogConfig.groovy to your directory and modify it to your needs. Then pass the path to your configuration file to the task using -PchangelogConfigFile=<your config file>. See the description inside the template for more details.

By default, the source is the Git history for the path src/docs - it only contains commit messages for changes to the documentation; changes to the build or other sources in the repository will not show up. The changelog lists each change with date, author, and commit message, already formatted as AsciiDoc table content:

| 09.04.2017
| Ralf D. Mueller
| fix #24 template updated to V7.0

| 08.04.2017
| Ralf D. Mueller
| fixed typo

You simply include it like this (adjust the relative path to build/docs/changelog.adoc to the location of your document):

[options="header"]
|===
| Date
| Author
| Comment

include::../../build/docs/changelog.adoc[]
|===

By excluding the table definition, you can easily translate the table headings through different text snippets.

It might make sense to include only certain commit messages from the changelog, or to exclude others (e.g. those starting with # or //). But this isn’t implemented yet.

3.11.1. Source

task exportChangeLog(
        description: 'exports the change log from a git subpath',
        group: 'docToolchain'
) doLast {
    println "changelogConfigFile: ${changelogConfigFile}"
    def config
    config = new ConfigSlurper().parse(new File(docDir, changelogConfigFile).text)

    def cmd = "${config.changelogCmd} ."
    def changes = cmd.execute(null, new File(docDir, config.changelogDir)).text
    new File(targetDir).mkdirs()
    def changelog = new File(targetDir, 'changelog.adoc')
    changelog.write(changes)
    logger.info "> changelog exported"
}

3.12. exportJiraIssues


3.12.1. Source

task exportJiraIssues(
        description: 'exports all jira issues from a given search',
        group: 'docToolchain'
) {
    doLast {
        def user = jiraUser
        def pass = jiraPass
        if (!pass) {
            pass = System.console().readPassword("Jira password for user '$user': ")
        }

        def stats = [:]
        def jira = new groovyx.net.http.RESTClient(jiraRoot + '/rest/api/2/')
        jira.encoderRegistry = new groovyx.net.http.EncoderRegistry(charset: 'utf-8')
        def headers = [
                'Authorization': "Basic " + "${user}:${pass}".bytes.encodeBase64().toString(),
                'Content-Type' : 'application/json; charset=utf-8'
        ]
        def openIssues = new File(targetDir, 'openissues.adoc')
        openIssues.write("", 'utf-8')
        println jiraJql.replaceAll('%jiraProject%', jiraProject).replaceAll('%jiraLabel%', jiraLabel)
        jira.get(path: 'search',
                query: ['jql'       : jiraJql.replaceAll('%jiraProject%', jiraProject).replaceAll('%jiraLabel%', jiraLabel),
                        'maxResults': 1000,
                        'fields'    : 'created,resolutiondate,priority,summary,timeoriginalestimate,assignee'
                ],
                headers: headers
        ).data.issues.each { issue ->
            openIssues.append("| <<${issue.key}>> ", 'utf-8')
            openIssues.append("| ${issue.fields.priority.name} ", 'utf-8')
            openIssues.append("| ${Date.parse("yyyy-MM-dd'T'H:m:s.000z", issue.fields.created).format('dd.MM.yy')} ", 'utf-8')
            openIssues.append("| ${issue.fields.assignee ? issue.fields.assignee.displayName : 'not assigned'} ", 'utf-8')
            openIssues.append("| ${jiraRoot}/browse/${issue.key}[${issue.fields.summary}]\n", 'utf-8')
        }
    }
}

3.13. exportPPT


3.13.1. Source

task exportPPT(
        dependsOn: [streamingExecute],
        description: 'exports all slides and some texts from PPT files',
        group: 'docToolchain'
) {
    doLast {
        //make sure path for notes exists
        //and remove old notes
        new File('src/docs/ppt').deleteDir()
        //also remove old diagrams
        new File('src/docs/images/ppt').deleteDir()
        //create a readme to clarify things
        def readme = """This folder contains exported slides or notes from .ppt presentations.

Please note that these are generated files but reside in the `src`-folder in order to be versioned.

This is to make sure that they can be used from environments other than windows.

# Warning!

**The contents of this folder        will be overwritten with each re-export!**

use `gradle exportPPT` to re-export files
"""
        new File('src/docs/images/ppt/.').mkdirs()
        new File('src/docs/images/ppt/readme.ad').write(readme)
        new File('src/docs/ppt/.').mkdirs()
        new File('src/docs/ppt/readme.ad').write(readme)
        //execute through cscript in order to make sure that we get WScript.echo right
        "%SystemRoot%\\System32\\cscript.exe //nologo scripts/exportPPT.vbs".executeCmd()
    }
}

3.14. exportExcel


Sometimes you have tabular data to include in your documentation. It is likely that such data is available as an Excel sheet, or that you would like to use MS Excel to create and edit it.

Either way, this task lets you export your excel sheet and include it directly in your docs.

The task searches for .xlsx files and exports each contained worksheet as .csv and as .adoc.

Formulas contained in your worksheet are evaluated and exported statically.

The generated files are written to src/excel/[filename]/[worksheet].(adoc|csv). The src folder is chosen over the build folder to get a better history of all changes to the worksheets.

The files can be included either as AsciiDoc (the paths below are examples for a hypothetical workbook Sample.xlsx with a worksheet Sheet1)

    include::excel/Sample.xlsx/Sheet1.adoc[]

or as CSV file

    [options="header",format="csv"]
    |===
    include::excel/Sample.xlsx/Sheet1.csv[]
    |===

The AsciiDoc version gives you a bit more control:

  • horizontal and vertical alignment is preserved

  • line breaks are preserved

  • column width relative to other columns is preserved

  • background colors are preserved.

3.14.1. Source

task exportExcel(
        description: 'exports all excelsheets to csv and AsciiDoc',
        group: 'docToolchain'
) {
    doLast {
        File sourceDir = file(srcDir)

        def tree = fileTree(srcDir).include('**/*.xlsx').exclude('**/~*')

        def exportFileDir = new File(sourceDir, 'excel')

        //make sure path for notes exists
        //create a readme to clarify things
        def readme = """This folder contains exported workbooks from Excel.

Please note that these are generated files but reside in the `src`-folder in order to be versioned.

This is to make sure that they can be used from environments other than windows.

# Warning!

**The contents of this folder will be overwritten with each re-export!**

use `gradle exportExcel` to re-export files
"""
        new File(exportFileDir, '/readme.ad').write(readme)

        def nl = System.getProperty("line.separator")

        def export = { sheet, evaluator, targetFileName ->
            def targetFileCSV = new File(targetFileName + '.csv')
            def targetFileAD = new File(targetFileName + '.adoc')
            def df = new org.apache.poi.ss.usermodel.DataFormatter();
            def regions = []
            sheet.numMergedRegions.times {
                regions << sheet.getMergedRegion(it)
            }
            logger.debug "sheet contains ${regions.size()} regions"
            def color = ''
            def resetColor = false
            def numCols = 0
            def headerCreated = false
            def emptyRows = 0
            for (int rowNum=0; rowNum<=sheet.lastRowNum; rowNum++) {
                def row = sheet.getRow(rowNum)
                if (row && !headerCreated) {
                    headerCreated = true
                    // create AsciiDoc table header
                    def width = []
                    numCols = row.lastCellNum
                    numCols.times { columnIndex ->
                        width << sheet.getColumnWidth((int) columnIndex)
                    }
                    //lets make those numbers nicer:
                    width = width.collect { Math.round(100 * it / width.sum()) }
                    targetFileAD.append('[options="header",cols="' + width.join(',') + '"]' + nl)
                    targetFileAD.append('|===' + nl)
                }
                def data = []
                def style = []
                def colors = []
                // For each row, iterate through each columns
                if (row && (row?.lastCellNum!=-1)) {
                    numCols.times { columnIndex ->
                        def cell = row.getCell(columnIndex)
                        if (cell) {
                            def cellValue = df.formatCellValue(cell, evaluator)
                            if (cellValue.startsWith('*') && cellValue.endsWith('\u20AC')) {
                                // Remove special characters at currency
                                cellValue = cellValue.substring(1).trim();
                            }
                            def cellStyle = ''
                            def region = regions.find { it.isInRange(cell.rowIndex, cell.columnIndex) }
                            def skipCell = false
                            if (region) {
                                //check if we are in the upper left corner of the region
                                if (region.firstRow == cell.rowIndex && region.firstColumn == cell.columnIndex) {
                                    def colspan = 1 + region.lastRow - region.firstRow
                                    def rowspan = 1 + region.lastColumn - region.firstColumn
                                    if (rowspan > 1) {
                                        cellStyle += "${rowspan}"
                                    }
                                    if (colspan > 1) {
                                        cellStyle += ".${colspan}"
                                    }
                                    cellStyle += "+"
                                } else {
                                    skipCell = true
                                }
                            }
                            if (!skipCell) {
                                switch (cell.cellStyle.alignmentEnum.toString()) {
                                    case 'RIGHT':
                                        cellStyle += '>'
                                        break
                                    case 'CENTER':
                                        cellStyle += '^'
                                        break
                                }
                                switch (cell.cellStyle.verticalAlignmentEnum.toString()) {
                                    case 'BOTTOM':
                                        cellStyle += '.>'
                                        break
                                    case 'CENTER':
                                        cellStyle += '.^'
                                        break
                                }
                                color = cell.cellStyle.fillForegroundXSSFColor?.rgb?.encodeHex()
                                color = color != null ? nl + "{set:cellbgcolor:#${color}}" : ''
                                data << cellValue
                                if (color == '' && resetColor) {
                                    colors << nl + "{set:cellbgcolor!}"
                                    resetColor = false
                                } else {
                                    colors << color
                                }
                                if (color != '') {
                                    resetColor = true
                                }
                                style << cellStyle
                            } else {
                                data << ""
                                colors << ""
                                style << "skip"
                        } else {
                            data << ""
                            colors << ""
                            style << ""

                    emptyRows = 0
                } else {
                    if (emptyRows < 3) {
                        //insert empty row
                        numCols.times {
                            data << ""
                            colors << ""
                            style << ""
                        }
                        emptyRows++
                    } else {
                        //already three consecutive empty rows: insert nothing more
                    }
                }
                //append the current row to the CSV file, quoting each value
                //(note: the variable name targetFileCSV and the withIndex() call
                // are assumptions - this part of the listing was damaged)
                targetFileCSV.append(data
                        .collect {
                            "\"${it.replaceAll('"', '""')}\""
                        }
                        .join(',') + nl, 'UTF-8')
                //append the current row as AsciiDoc table cells, one cell per line
                targetFileAD.append(data
                        .withIndex()
                        .collect { value, index ->
                            if (style[index] == "skip") {
                                //covered by a merged region - contributes nothing
                                ''
                            } else {
                                style[index] + "| ${value.replaceAll('[|]', '{vbar}').replaceAll("\n", ' +$0') + colors[index]}"
                            }
                        }
                        .join(nl) + nl * 2, 'UTF-8')
            }
            targetFileAD.append('|===' + nl)

        tree.each { File excel ->
            println excel
            def excelDir = new File(exportFileDir, excel.getName())
            //open the workbook and create an evaluator for formula cells
            InputStream inp = new FileInputStream(excel)
            def wb = org.apache.poi.ss.usermodel.WorkbookFactory.create(inp)
            def evaluator = wb.getCreationHelper().createFormulaEvaluator()
            //export every sheet of the workbook to its own target file
            for (int wbi = 0; wbi < wb.getNumberOfSheets(); wbi++) {
                def sheetName = wb.getSheetAt(wbi).getSheetName()
                println sheetName
                def targetFile = new File(excelDir, sheetName)
                export(wb.getSheetAt(wbi), evaluator, targetFile.getAbsolutePath())
            }
            inp.close()
        }
3.15. htmlSanityCheck


This task invokes the htmlSanityCheck Gradle plugin. It is a standalone (batch and command-line) HTML sanity checker - it detects missing images, dead links, and duplicate bookmarks.

In docToolchain, this task is used to ensure that the generated HTML contains no broken links or other problems.

This task is the last of the default tasks and creates a report in build/report/htmlchecks/index.html.
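The check runs as part of the default build, but it can also be invoked on its own. A minimal sketch, assuming the standard Gradle wrapper is present in the project root:

```
# run only the HTML checks
./gradlew htmlSanityCheck

# then inspect the report:
# build/report/htmlchecks/index.html
```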

Figure 2. sample report

Further information can be found on github: https://github.com/aim42/htmlSanityCheck

3.15.1. Source

htmlSanityCheck {
    sourceDir = new File("$targetDir/html5")

    // files to check - in Set-notation
    //sourceDocuments = [ "one-file.html", "another-file.html", "index.html"]

    // where to put results of sanityChecks...
    checkingResultsDir = new File( checkingResultsPath )
    checkExternalLinks = false
}

3.16. dependencyUpdates

This task uses the Gradle versions plugin created by Ben Manes to check for outdated build dependencies. It is quite helpful for keeping all dependencies up to date.
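For a standalone Gradle project, the plugin can be applied like this - a minimal sketch, assuming the plugin id com.github.ben-manes.versions; the version number shown is an assumption, so check the Gradle plugin portal for the current one:

```
// build.gradle - apply the Gradle versions plugin
plugins {
    id "com.github.ben-manes.versions" version "0.51.0"  // version is an assumption
}
```

Running ./gradlew dependencyUpdates then prints a report of all dependencies for which newer versions are available.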

4. FAQ: Frequently asked Questions

This section answers the most frequently asked questions about working with docToolchain. It also contains questions about the tools docToolchain is built with, but the main focus is docToolchain itself.

If you are stuck, make sure that you also check other sources like Stack Overflow.

There is also a great FAQ for all your arc42 questions: http://faq.arc42.org/home/

If you have a question or problem for which you can’t find a solution, you can open an issue in the docToolchain repository on GitHub.

4.1. Images

4.1.1. Q: Why are images not shown in the preview of my editor?

A: This is most likely because your editor doesn’t know where they are stored. If you follow the default settings, you probably store your images in a subfolder named images. The build script knows about this location, because the attribute imagesdir has been set to ./images, but your editor doesn’t care about the build script - it only checks the currently opened AsciiDoc file.

The solution is to add a line to each file which checks if the imagesdir is set and if not, sets it to a valid value:

ifndef::imagesdir[:imagesdir: ../images]
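For example, a chapter file might start like this (the section title and image name are only illustrative):

```
ifndef::imagesdir[:imagesdir: ../images]

== Building Block View

image::building-blocks.png[]
```

With this guard in place, the image resolves both in the editor preview and in the full build, because the build script's own imagesdir setting takes precedence when it is already defined.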

4.1.2. Q: Which image format should I use?

A: AsciiDoc and Asciidoctor support several formats like GIF, PNG, JPG and SVG. However, if you want to use most features, some formats work better than others:

GIF: not supported by the PDF renderer. Use JPG or PNG instead.

JPG: great for photos, but not for diagrams (you might get compression artefacts). So if you want to use photos from your flipcharts, JPG might work for you.

SVG: great for high-resolution diagrams, but not well supported by DOCX as an output format. OpenOffice Writer might display the image a bit stretched; MS Word didn’t display it at all in some experiments.

PNG: the preferred format for images used with docToolchain. All output formats support it, and if diagrams are rendered with a resolution high enough to display all details, they will also scale well in all output formats.

4.1.3. Q: Why are my images rotated in the output?

A: This most likely happens when you’ve taken photos with a mobile device and include them in your docs. A mobile device does not rotate the image itself; it only stores the orientation of the device in the metadata of the photo. Your operating system will show you the image as expected, but the rendered AsciiDoc will not. This can be "fixed" with ImageMagick: use convert -auto-orient or mogrify -auto-orient (thanks to @rotnroll666 for this tip). You can also try to just open the image in your favourite editor and re-save it.
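As a sketch, assuming ImageMagick is installed and the photos are JPG files in the current directory:

```
# mogrify rotates the pixel data according to the EXIF orientation tag
# and resets the tag; note that it overwrites the files in place
mogrify -auto-orient *.jpg

# convert does the same but writes the result to a new file instead
convert -auto-orient photo.jpg photo-upright.jpg
```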

4.2. exportVisio

4.2.1. Q: I get an error message saying that a library is not registered when I try to run the exportVisio-task.

Ausnahme beim Festlegen von "Visible": "Das COM-Objekt des Typs "Microsoft.Office.Interop.Visio.ApplicationClass" kann nicht in den Schnittstellentyp
"Microsoft.Office.Interop.Visio.IVApplication" umgewandelt werden. Dieser Vorgang konnte nicht durchgeführt werden, da der QueryInterface-Aufruf an die
COM-Komponente für die Schnittstelle mit der IID "{000D0700-0000-0000-C000-000000000046}" aufgrund des folgenden Fehlers nicht durchgeführt werden konnte:
Bibliothek nicht registriert. (Ausnahme von HRESULT: 0x8002801D (TYPE_E_LIBNOTREGISTERED))."
In ...\scripts\VisioPageToPngConverter.ps1:48 Zeichen:1
+ $VisioApp.Visible = $false
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], SetValueInvocationException
    + FullyQualifiedErrorId : ExceptionWhenSetting

A: When Visio is installed, it registers itself as a COM library. It seems that this registration can break. You can fix this by opening the Windows system settings → "Install or uninstall a program", selecting Visio, selecting "Change" and then "Repair".

4.3. PlantUML


5. Further Reading

This chapter lists some additional references to interesting resources.

5.2. Books

Links to Amazon are affiliate links.

5.2.1. English Books

5.3. Past and upcoming Talks

5.3.1. Dokumentation am (Riesen-)Beispiel – arc42, AsciiDoc und Co. in Aktion

Using a large system as an example, Gernot and Ralf show how to produce appropriate and reasonable documentation for different stakeholders with fairly little effort - in a way that development teams can even enjoy.

Our recipe: mix AsciiDoc with arc42, add automation with Gradle and Maven, and combine it with the diagramming or modelling tools of your choice. The result is polished HTML and review-ready PDF documents. On request, DOCX and Confluence are available as a bonus.

We show how you can manage documentation just like source code, generate stakeholder-specific documents and integrate diagrams automatically. Some parts of this documentation can even be tested automatically.

Along the way you will get numerous tips on how and where to systematically reduce the effort for documentation while still producing readable, understandable and practical results.


5.3.2. Gesunde Dokumentation mit Asciidoctor

Authors want to document content efficiently and reuse existing content. Readers want the document presented to them in an appealing layout.

The text-based AsciiDoc format offers developers and technical writers all the markup elements needed to write even demanding documents. Among other things, tables, footnotes and annotated source code are supported. At the same time it is similarly lightweight to, for example, the Markdown format. For the readers, HTML, PDF or EPUB is generated.

Since AsciiDoc is checked in like program code and merge operations are easy, program code and documentation can be versioned together and kept consistent with each other.

The talk gives a short introduction to AsciiDoc and the accompanying tools.


6. Acknowledgements and Contributors

This project is an open source project which is based on community efforts.

Many people are involved in the underlying technologies like AsciiDoc, Asciidoctor, Gradle, arc42 etc. This project depends and builds on them.

But it depends even more on the direct contributions made by giving feedback, creating issues, answering questions or sending pull requests.

Here is an incomplete and unordered list of contributors:

(please update your entry to match your preferences! :-)