
iText DITO Docker API (iText_DITO 2.4)

The iText DITO SDK for Docker image allows you to use iText DITO conveniently from various environments, enabling you to produce PDFs from your output templates.

How to use the iText DITO SDK for Docker image

First, you'll need to pull the image. Note: you need to specify the version explicitly when using the pull command, e.g.:

docker pull itext/dito-sdk:{version}

For the full list of available versions, go to the Tags tab on the Docker Hub repository page.
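For instance, if version 2.4.0 appears on the Tags tab (version chosen for illustration), the command would be:

docker pull itext/dito-sdk:2.4.0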

Start the SDK

To start the Docker image, run the following command:

docker run -it --name dito-sdk-instance \
         -v {path to config directory}:/etc/opt/dito/shared \
         -v {path to work directory}:/var/opt/dito \
         -p {port on host}:8080 \
         itext/dito-sdk:{version}

config directory - the directory that contains the iText DITO license and the application configuration file.

work directory - the working directory where all project files and data samples should be placed.

port on host - the port on the host machine through which the SDK's HTTP API (container port 8080) will be accessible.

iText DITO License file

The license file provided by iText must be present in the config directory. The expected default name is license.json, but it can be overridden by setting the DITO_LICENSE_FILE environment variable during run, for example:

docker run {other options} -e DITO_LICENSE_FILE=dito.json itext/dito-sdk:{version}

Note: If you're using Azure, you may get an error message saying that the license key cannot be found. To resolve this, enable volume mounting by setting the WEBSITES_ENABLE_APP_SERVICE_STORAGE=true environment variable.
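If you manage the App Service with the Azure CLI, the setting could be applied roughly as follows (a sketch, not an official iText instruction; the resource group and app names are placeholders):

az webapp config appsettings set --resource-group {resource group} --name {app name} --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true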

Application configuration file

A YAML file with the application configuration, of the following form:

logAppenders:
- type: file
- type: stdout
pdfProducerReportCache:
  maxSize: 1000
  expireAfterWrite: P1D
templateDeploymentFeature:
  enabled: true
  timeout:
    eachOperationRetryMillis: 1000
    eachOperationRetryCount: 5
    allBlockWaitMillis: 5000
    fetchAllDescriptorsWaitMillis: 15000
pdfProductionTemplateHttpFetchingMode:
  cacheTTL: PT10M

logAppenders - a list of log appenders that configures where log messages are written. Each log appender is an object with a defined type. Available log appender types:

  • stdout - log messages will be sent to stdout;

  • file - log messages will be placed in files in a log directory.

You can specify a single log appender (stdout or file) or several at once (stdout and file simultaneously). If the list is null or empty, the default appender stdout is used.
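For example, a minimal user.config.yml that writes log messages only to files in the log directory would contain just:

logAppenders:
- type: file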

pdfProducerReportCache - configures how many cached production reports are stored in memory and for how long.

  • maxSize - a 64-bit integer specifying how many reports can be stored simultaneously.

  • expireAfterWrite - a string in an ISO-8601 duration subset (days, hours, minutes, seconds, e.g. P1DT2H3M4S) specifying how long the reports are kept; see the sketch below.
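A sketch with illustrative values, keeping at most 500 reports, each for up to 12 hours after it is written:

pdfProducerReportCache:
  maxSize: 500
  expireAfterWrite: PT12H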

templateDeploymentFeature - configures the template deployment feature, which covers registering/unregistering templates and deploying/undeploying them.

  • enabled - enables the feature; true by default.
  • timeout - configures template operation wait timeouts.
    • eachOperationRetryMillis - how long to wait before the next retry of operations such as deploy, undeploy, and PDF production; 1000 ms by default.
    • eachOperationRetryCount - how many retries are attempted before timing out for operations such as deploy, undeploy, and PDF production; 5 by default.
    • allBlockWaitMillis - how long to wait before timing out for operations such as register and unregister; 5000 ms by default.
    • fetchAllDescriptorsWaitMillis - how long to wait before the fetch-all-descriptors operation times out; 15000 ms by default.

pdfProductionTemplateHttpFetchingMode - the configuration section for the HTTP PDF production mode. This section is ignored unless the PDF_PRODUCTION_TEMPLATE_SOURCE environment variable is set to http.

  • cacheTTL - sets the period during which the 'client' SDK (the one that fetches a template package for PDF production) keeps a fetched template package without making a new HTTP request.

The file may optionally be present in the config directory; if it is not present, the default one is used. The expected default name for the YAML file is user.config.yml, but it can be overridden by setting the DITO_USER_CONFIG environment variable during run, for example:

docker run {other options} -e DITO_USER_CONFIG=config.yml itext/dito-sdk:{version}

Stop the SDK

To stop the SDK service gracefully, run the standard command:

docker kill --signal=SIGTERM {containerId}

You can also use another standard command:

docker stop -t 30 {containerId}

This command sends the SIGTERM signal to the SDK process inside the Docker container, then waits for the specified number of seconds (10 by default). If the SDK doesn't manage to finish shutting down within that period, SIGKILL is sent and the SDK process is killed.

Configuring Docker SDK for optimal performance

A Java application runs under the hood of the Docker SDK, which allows you to fine-tune the JVM for optimal performance in your circumstances.

For example, to extend the JVM's default memory allocation you can tune the -Xmx parameter.

To configure the Java Virtual Machine options you can pass the JAVA_TOOL_OPTIONS environment variable to the Docker container.

Example: docker run {other options} -e JAVA_TOOL_OPTIONS="-Xmx5G" itext/dito-sdk:{version}

If you have configured the environment variable and passed it correctly to the iText DITO Docker SDK container, you should see the following message in the logs:

Picked up JAVA_TOOL_OPTIONS: -Xmx5G
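Several JVM options can be combined in the same variable; for instance (flags shown for illustration):

docker run {other options} -e JAVA_TOOL_OPTIONS="-Xms1G -Xmx5G" itext/dito-sdk:{version}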

DITO template package restrictions

A DITO template package has the following restrictions by default:

  1. The maximum decompressed size of a template package is 1 GB
  2. The number of archived files cannot exceed 10,000
  3. The compression ratio of each file cannot exceed 200

These limits are in place to prevent certain vulnerabilities (e.g., zip bombs). To change them, set the following environment variables accordingly:

  1. DITO_TEMPLATE_PACKAGE_MAX_DECOMPRESSED_TOTAL_SIZE (size in bytes)
  2. DITO_TEMPLATE_PACKAGE_MAX_ENTRY_COUNT
  3. DITO_TEMPLATE_PACKAGE_MAX_ENTRY_COMPRESSION_RATIO

For example, to increase the maximum decompressed size to 2 GB, the maximum number of archived files to 100,000, and the maximum compression ratio to 10,000, the following command can be used:

docker run {other options} \
         -e DITO_TEMPLATE_PACKAGE_MAX_DECOMPRESSED_TOTAL_SIZE=2147483648 \
         -e DITO_TEMPLATE_PACKAGE_MAX_ENTRY_COUNT=100000 \
         -e DITO_TEMPLATE_PACKAGE_MAX_ENTRY_COMPRESSION_RATIO=10000 \
         itext/dito-sdk:{version}

Web API reference

To get information about the Web API, you can refer to the following URL on a running instance:

/api/docs/index.html

This page describes the available resources and descriptors, and you can also execute requests directly from it.
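For example, with container port 8080 mapped to host port 8080 (mapping assumed), the page can be fetched with:

curl http://localhost:8080/api/docs/index.html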

If you would like to access the Web API documentation without starting the application, you can find the same information via the links below:

iText DITO Docker REST API

iText DITO Docker REST API SwaggerHub

Examples

Running the SDK

On Windows
docker run -it --name dito-sdk-example ^
         -v D:\docker\dito\config:/etc/opt/dito/shared ^
         -v D:\docker\dito\work:/var/opt/dito ^
         -p 42:8080 ^
         itext/dito-sdk:1.1.8

D:\docker\dito\config - a folder containing the license.json file.

D:\docker\dito\work - a working folder where all project files and data samples should be placed.

On Linux
docker run -it --name dito-sdk-example \
         -v /home/docker/dito/config:/etc/opt/dito/shared \
         -v /home/docker/dito/work:/var/opt/dito \
         -p 42:8080 \
         itext/dito-sdk:1.1.8

/home/docker/dito/config - a folder containing the license.json file.

/home/docker/dito/work - a working folder where all project files and data samples should be placed.
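With either example, you can verify that the instance is up by checking its logs or by requesting the Web API documentation through the mapped host port 42:

docker logs dito-sdk-example
curl http://localhost:42/api/docs/index.html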

Horizontal scaling

When you set up additional SDK instances to work with the same storage, they must be configured in a particular way so that all requests are properly synchronized. One instance is set up as the primary; it can perform all operations and hosts templates for the other, secondary instances.

Instance setup

To enable template fetching for the primary instance and configure the secondary instances to communicate with it, you'll need to set the following environment variables:

  • PDF_PRODUCTION_TEMPLATE_SOURCE must be set to http
  • PDF_PRODUCTION_TEMPLATE_HTTP_SOURCE_URL must be set to <primary-instance-base-url>/api/deployments/{alias}/all
  • Optionally, you can also set PDF_PRODUCTION_TEMPLATE_HTTP_SOURCE_DEFAULT_CACHE_TTL to an ISO-8601 period string to customize how long a template is cached on a secondary instance after fetching. Increasing this value delays the propagation of updates from the primary instance, but decreases the number of HTTP requests between instances.

You also need to set the following environment variables, but only for the secondary instances:

  • RESTRICT_DEPLOYMENT_FEATURE_TO_PDF_PRODUCTION_ONLY_MODE must be set to true.
  • TEMPLATE_DESCRIPTOR_HTTP_SOURCE_URL must be set to <primary-instance-base-url>/api/deployments/{alias}
  • ALL_TEMPLATE_DESCRIPTORS_HTTP_SOURCE_URL must be set to <primary-instance-base-url>/api/deployments

This enables the endpoints for retrieving deployment descriptors to work properly and makes a secondary instance report its status correctly.

Note that the deploy/undeploy and register/unregister endpoints must be called on the primary instance only.

Minimal compose file example

Please note that you will need to set up port mappings for external access; a sketch is shown after the example.

version: "3"

services:
   dito-sdk-primary:
      image: itext/dito-sdk:<version>
      container_name: dito-sdk-primary
      volumes:
         - <config-path>:/etc/opt/dito/shared
      environment:
         - PDF_PRODUCTION_TEMPLATE_SOURCE=http
         - PDF_PRODUCTION_TEMPLATE_HTTP_SOURCE_URL=http://dito-sdk-primary:8080/api/deployments/{alias}/all
   dito-sdk-secondary-one:
      image: itext/dito-sdk:<version>
      container_name: dito-sdk-secondary-one
      volumes:
         - <config-path>:/etc/opt/dito/shared
      environment:
         - RESTRICT_DEPLOYMENT_FEATURE_TO_PDF_PRODUCTION_ONLY_MODE=true
         - PDF_PRODUCTION_TEMPLATE_SOURCE=http
         - PDF_PRODUCTION_TEMPLATE_HTTP_SOURCE_URL=http://dito-sdk-primary:8080/api/deployments/{alias}/all
         - TEMPLATE_DESCRIPTOR_HTTP_SOURCE_URL=http://dito-sdk-primary:8080/api/deployments/{alias}
         - ALL_TEMPLATE_DESCRIPTORS_HTTP_SOURCE_URL=http://dito-sdk-primary:8080/api/deployments
   dito-sdk-secondary-two:
      image: itext/dito-sdk:<version>
      container_name: dito-sdk-secondary-two
      volumes:
         - <config-path>:/etc/opt/dito/shared
      environment:
         - RESTRICT_DEPLOYMENT_FEATURE_TO_PDF_PRODUCTION_ONLY_MODE=true
         - PDF_PRODUCTION_TEMPLATE_SOURCE=http
         - PDF_PRODUCTION_TEMPLATE_HTTP_SOURCE_URL=http://dito-sdk-primary:8080/api/deployments/{alias}/all
         - TEMPLATE_DESCRIPTOR_HTTP_SOURCE_URL=http://dito-sdk-primary:8080/api/deployments/{alias}
         - ALL_TEMPLATE_DESCRIPTORS_HTTP_SOURCE_URL=http://dito-sdk-primary:8080/api/deployments
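For example, to expose the primary instance on host port 8080 (port chosen for illustration), a ports section can be added under its service definition; each service that must be reachable from outside the Compose network needs its own mapping:

   dito-sdk-primary:
      ports:
         - "8080:8080"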

Extending from DITO SDK Docker image

In some cases you may want to extend the DITO SDK Docker image. Here are some inner details of the base image that may help you do so. To extend the DITO SDK Docker image, start your Dockerfile with FROM itext/dito-sdk:{version}, e.g.

FROM itext/dito-sdk:1.4.2
What is needed to run the base image

The main things you need are a license file in the config directory and DITO projects in the working directory.

License file

The license file is expected to be stored at /etc/opt/dito/shared/$DITO_LICENSE_FILE, where DITO_LICENSE_FILE is a predefined environment variable equal by default to license.json.

So if you want to avoid providing a volume mapping for the folder with the license file, you can do so by extending the base image and adding:

COPY path/on/local/machine/to/license/file.json /etc/opt/dito/shared/$DITO_LICENSE_FILE

NOTE: Be careful when sharing images that contain license files.

Optional user config

The optional user configuration is expected in the same folder as the license file, at /etc/opt/dito/shared/$DITO_USER_CONFIG, where DITO_USER_CONFIG is a predefined environment variable equal by default to user.config.yml. So it can be provided similarly to the license file:

COPY path/on/local/machine/to/config/user.yml /etc/opt/dito/shared/$DITO_USER_CONFIG
Work directory

To produce PDFs, you'll need some .dito project files in the working directory. There is a special predefined environment variable, DITO_WORK_DIR, equal by default to /var/opt/dito.

So if you want to avoid providing a volume mapping for the work directory, you can copy all required .dito projects in your Dockerfile:

COPY path/on/local/machine/to/project1.dito path/on/local/machine/to/project2.dito $DITO_WORK_DIR/

NOTE: If the working directory was changed at some point before the base entrypoint runs, it should be restored to the DITO work directory. This can be done in the Dockerfile:

WORKDIR $DITO_WORK_DIR
Logs

If the file log appender is used, the logs are stored at /var/log/dito in the container and are kept for 30 days. To ease access to these logs, you can provide a log directory mapping after the work directory mapping during docker run, as follows: -v {path to log directory}:/var/log/dito. If the log directory mapping is not provided, the log files are stored only in the container and can still be accessed with e.g. the docker cp command.

When the stdout log appender is used, all log messages are sent directly to stdout. Any messages that the container sends to stdout can be viewed using the following command: docker logs {container_name_or_ID}
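For example, to copy file logs out of a container or follow its stdout output (container name taken from the run example earlier):

docker cp dito-sdk-instance:/var/log/dito ./dito-logs
docker logs -f dito-sdk-instance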

Running the image

To prepare the dito application to work correctly, the script stored at /opt/dito/startup-prepare.main.kts is used. After that, the following command is called: java -jar /opt/dito/$DITO_JAR_NAME server /etc/opt/dito/config.yml, where DITO_JAR_NAME=dito-sdk-docker.jar. There is no need to write

ENTRYPOINT ["bash", "-c", "source $HOME/.sdkman/bin/sdkman-init.sh && kotlin /opt/dito/startup-prepare.main.kts && java -jar /opt/dito/$DITO_JAR_NAME server /etc/opt/dito/config.yml"]

since the base one will be used automatically. However, if you want to do something before running the DITO application in the container, you can put those steps in your own script and call all the commands from the entrypoint above at the end.

Full example of a Dockerfile for an image that doesn't require any volume mappings
FROM itext/dito-sdk:1.4.2
COPY license.json /etc/opt/dito/shared/$DITO_LICENSE_FILE
COPY user.config.yml /etc/opt/dito/shared/$DITO_USER_CONFIG
COPY project1.dito project2.dito $DITO_WORK_DIR/

Note: When running such an image you still need to map container port 8080.
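Such an image can then be built and started roughly as follows (the image tag is illustrative):

docker build -t my-dito-sdk .
docker run -it -p 8080:8080 my-dito-sdk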

Permissions troubleshooting in version 2.2.0 and greater

In iText DITO version 2.2.0 we changed our containers to run as a non-root user. Because of this, there is a potential issue with access to folders and files that were created by containers of previous versions and are mapped to the host machine.
For example, if you had a work directory containing templates created by a dito-sdk version earlier than 2.2.0 and then upgraded to 2.2.0, which runs as a non-root user, the work directory can become inaccessible due to lack of permissions.

There are a couple of possible scenarios where this can apply.

Bind mount

In cases where you run your dito-sdk container using bind mounts to the required folders (as described in the Start the SDK section), you need to manually grant the following permissions to the folders (and all subfolders and files inside them) on the host machine:

  • Read permissions
    • config folder (by default this is /etc/opt/dito/shared inside the container)
  • Read & Write permissions
    • log folder, in case file logging is enabled (mapped to /var/log/dito inside the container)
    • work directory (by default /var/opt/dito inside the container, but can be overridden via the DITO_WORK_DIR environment variable)
Permission granting

Permissions can be granted in several ways:

  1. Give the corresponding permission (w or r) to others (e.g. chmod -R o=r /pathToHostCfgFolder)
  2. As the user and group inside the container have a static UID and GID equal to 2000, it's possible to give access via chown (e.g. chown -R 2000:2000 /pathToHostCfgFolder)
  3. (Can be used on Windows OS)
    1. Run any container you like (for example the busybox image), bind-mounting the same host folders you used when running dito-sdk to any folder in the container (e.g. docker run -it -v /var/myWorkDir:/etc/any busybox),
    2. Go to the container shell (docker exec -it <container name> /bin/bash; try sh instead of /bin/bash if it doesn't work),
    3. Give access to the folders inside the container to which you've bound folders from the host, using one of the approaches above (e.g. chown -R 2000:2000 /etc/any),
    4. Don't forget to stop & remove your running container.
      All of this can be executed as a one-liner:
      docker run -v <path to folder on host>:/etc/any -d -it --name upgradeHelpContainer busybox && docker exec -it upgradeHelpContainer sh -c "chown -R 2000:2000 /etc/any" && docker rm --force upgradeHelpContainer

Don't forget to grant permissions to any other extra bound folders or files, if such are used.

Named volumes

In cases where you run dito-sdk with named volumes, there are two approaches that can be used to grant permissions:

Note: this is only required when upgrading from a pre-2.2.0 version with already existing volumes; otherwise there should be no problem creating volumes from scratch.

  1. (Manual, system dependent, not applicable to Windows OS)
    1. Use docker volume inspect <volume name> to find where the volume resides on the host machine,
    2. Give permissions to that folder on the host machine using one of the approaches described above under Bind mount > Permission granting.
  2. (System independent)
    1. Run any container you like (for example the busybox image), using the same volumes you used when running dito-sdk, mounted to any folder in the container (e.g. docker run -it -v myTestVolume:/etc/any busybox),
    2. Go to the container shell (docker exec -it <container name> /bin/bash; try sh instead of /bin/bash if it doesn't work),
    3. Give access to the folders inside the container to which you've bound the volumes (e.g. chown -R 2000:2000 /etc/any),
    4. Don't forget to stop & remove your running container.
      All of this can be executed as a one-liner:
      docker run -v <volume name>:/etc/any -d -it --name upgradeHelpContainer busybox && docker exec -it upgradeHelpContainer sh -c "chown -R 2000:2000 /etc/any" && docker rm --force upgradeHelpContainer