Deploy the TinyMCE Export to PDF service server-side component using Docker (individually licensed)
Overview
The On-Premises version of the Export to PDF Converter is an application that can be installed and run on the customer’s in-house servers and computing infrastructure, including a private cloud. It contains all the features of the Export to PDF Converter available as SaaS.
A valid license key is required in order to install Export to PDF Converter On-Premises. Contact us for a trial license key.
The only requirement to run Export to PDF On-Premises is a container runtime or orchestration tool, e.g. Docker, Kubernetes, or Podman.
Requirements
To run Export to PDF On-Premises, a Docker environment is required. Alternatively, use a CaaS (Containers as a Service) offering from your cloud provider, like AWS ECS, Google GKE, or Azure ACS.
There are many factors affecting Export to PDF On-Premises performance. The most influential are the size of the exported content, the size of images, and the number of concurrent requests. Because some applications prioritize fast response times while others must handle high load, it is impossible to provide a single recommended server specification that fits all use cases.
Assuming a response time below 10 seconds, one server (2 CPUs, 2 GB RAM) running 1 Docker container can handle:
- up to 40 concurrent requests with an average content of 1 A4 page (~1k characters and 1 image)
- up to 25 concurrent requests with an average content of 5 A4 pages (~7.5k characters and 5 images)
- up to 10 concurrent requests with an average content of 20 A4 pages (~30k characters and 20 images)
The concurrent request numbers above are not a hard limit of an Export to PDF On-Premises instance. It can handle more concurrent requests, but response times will be longer.
High availability
A single Docker container with Export to PDF On-Premises benefits from additional CPUs on the machine. To scale the application on a single machine, increase the number of CPUs. However, Tiny recommends scaling across at least three hosts to ensure the reliability of the system.
A load balancer, like HAProxy or NGINX (see the load balancer configuration examples in the SSL communication guide), is required for scaling across several machines. It is also possible to use any cloud provider offering for scaling, like Amazon ECS, Azure Container Instances, or Kubernetes.
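For illustration, here is a minimal sketch of an NGINX configuration that balances requests across two Export to PDF On-Premises instances. It mirrors the NGINX example in the SSL communication section; the backend addresses are placeholders, not product defaults:
events {
    worker_connections 1024;
}
http {
    # Hypothetical pool of Export to PDF On-Premises hosts
    upstream pdf_converters {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }
    server {
        listen 80;
        location / {
            # Requests are distributed round-robin across the pool
            proxy_pass http://pdf_converters;
        }
    }
}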
Contact us if you have any questions about server resources needed for your use case of Export to PDF On-Premises.
Installation
A valid license key is needed in order to install Export to PDF On-Premises. Contact us for a trial license key.
Supported technologies
The application is provided as a Docker image by default. It can be run with any Open Container runtime or orchestration tool, e.g. Kubernetes, OpenShift, Podman, Docker, and many others.
Refer to the Requirements guide for more information about the hardware and software requirements to run the Export to PDF On-Premises.
Setting up the application using a Docker container
- Log in to the Docker registry with the username and password credentials supplied by Tiny and pull the Docker image.
- Run the application using docker or docker-compose.
- Use the demo page to verify that the application works properly.
Example using docker
Log in to the Docker registry:
docker login -u [username] -p [password] registry.containers.tiny.cloud
Launch the Docker container:
docker run --init -p 8080:8080 -e LICENSE_KEY=[your_license_key] registry.containers.tiny.cloud/pdf-converter-tiny:[version]
If using authorization, provide the SECRET_KEY:
docker run --init -p 8080:8080 -e LICENSE_KEY=[your_license_key] -e SECRET_KEY=[your_secret_key] registry.containers.tiny.cloud/pdf-converter-tiny:[version]
Read more about using authorization in the authorization section.
Example using docker-compose
Create the docker-compose.yml file:
version: "3.8"
services:
  pdf-converter-tiny:
    image: registry.containers.tiny.cloud/pdf-converter-tiny:[version]
    ports:
      - "8080:8080"
    restart: always
    init: true
    environment:
      LICENSE_KEY: "license_key"
      # Secret Key is optional
      SECRET_KEY: "secret_key"
      # Custom request origin is optional
      CUSTOM_REQUEST_ORIGIN: "https://your_custom_origin"
For details on SECRET_KEY usage, check the authorization section.
Run:
docker-compose up
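Once the container is up, a quick sanity check, assuming the default port mapping, is to request the demo page and confirm it responds with HTTP 200:
# Prints the HTTP status code of the demo page (expected: 200)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/demo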
Windows fonts support
If using Windows fonts like Calibri, Verdana, etc. in PDF files, use the pdf-converter-windows-tiny Docker image and run it on a Windows operating system. See the Fonts section for more details.
Next steps
Use the http://localhost:8080/v1/convert endpoint to export PDF files (see the example below). Check out the authorization section to learn more about tokens and token endpoints.
Use the demo page available on http://localhost:8080/demo to generate an example PDF file.
Refer to the Export to PDF REST API documentation on http://localhost:8080/docs for more details.
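As a quick start, here is a minimal example request to the convert endpoint, assuming authorization is disabled (otherwise add the Authorization header described in the authorization section):
# Convert a small HTML snippet to PDF and save it as file.pdf
curl -X POST http://localhost:8080/v1/convert \
  -H "Content-Type: application/json" \
  -d '{"html": "<p>I am a teapot</p>", "css": "p { color: red; }"}' \
  --output file.pdf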
Fonts
During document writing, the ability to use many different fonts can be very important to users.
Using the appropriate font can change the appearance of the document and emphasize its style.
Export to PDF Converter allows Web Fonts to be used, which provides the integrator with the ability to use standard operating system fonts or custom fonts without the need to import them using CSS.
Below is a list of the basic fonts included in the image:
OpenSans-Bold.ttf
OpenSans-BoldItalic.ttf
OpenSans-ExtraBold.ttf
OpenSans-ExtraBoldItalic.ttf
OpenSans-Italic.ttf
OpenSans-Light.ttf
OpenSans-LightItalic.ttf
OpenSans-Regular.ttf
OpenSans-Semibold.ttf
OpenSans-SemiboldItalic.ttf
However, additional fonts can be added to Export to PDF Converter in two ways:
- Use the Unix-like PDF Converter image registry.containers.tiny.cloud/pdf-converter-tiny and mount a fonts directory to it. See the Add custom fonts to PDF Converter section.
- Use the Windows PDF Converter image registry.containers.tiny.cloud/pdf-converter-windows-tiny and mount the fonts directory from the Windows operating system on which the container is running. See the Use Windows fonts in PDF Converter section.
The fonts inside the mounted volume will be installed on the Docker image operating system. Only the .ttf and .otf font formats are supported. If other font formats are used, they will need to be converted to a supported format beforehand, or use fonts such as Web Fonts instead.
Ensure that the converted fonts can be installed and used on your local machine before installing them on the Docker container.
Add custom fonts to PDF Converter
If custom fonts are being used in PDF files, use the pdf-converter-tiny Docker image and mount the directory with the custom fonts for the PDF Converter application running on a machine with a Unix-like system (this includes Docker on Windows with a WSL backend).
The registry.containers.tiny.cloud/pdf-converter-tiny Docker image needs to be run on a Unix-like operating system with the ~/your_fonts_dir:/usr/share/fonts/your_fonts_dir volume mounted.
Example launching the Docker container on a Unix-like operating system:
docker run --init -v ~/your_fonts_dir:/usr/share/fonts/your_fonts_dir -p 8080:8080 -e LICENSE_KEY=[your_license_key] registry.containers.tiny.cloud/pdf-converter-tiny:[version]
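To check that the mounted fonts were picked up, one option is to list the fonts visible inside the running container. This assumes the image ships with fontconfig, which is not documented behavior, and your_font is a placeholder name:
# List fonts registered inside the container and look for the mounted one
docker exec [container_id] fc-list | grep -i your_font
Once the font is visible, reference its family name in the css payload of a convert request, for example p { font-family: 'Your Font'; }.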
Use Windows fonts in PDF Converter
If using Windows fonts like Arial, Verdana, etc. in PDF files, use the pdf-converter-windows-tiny Docker image, which allows you to run the application on a machine with a Windows operating system and mount fonts from the system.
Run the registry.containers.tiny.cloud/pdf-converter-windows-tiny Docker image on a Windows operating system and mount the C:\Windows\Fonts:C:\Windows\Fonts volume.
Example launching the Docker container on a Windows operating system:
docker run -v C:\Windows\Fonts:C:\Windows\Fonts -p 8080:8080 --env LICENSE_KEY=[your_license_key] registry.containers.tiny.cloud/pdf-converter-windows-tiny:[version]
Authorization
To enable authorization, set the SECRET_KEY environment variable during the installation.
If the SECRET_KEY variable is set, then all requests must have a header with a JWT (JSON Web Token) signed with this key. The token should be passed as the value of the Authorization header for each request sent to the Export to PDF REST API.
If the SECRET_KEY is not set up during the installation, then Export to PDF On-Premises will not require any headers with tokens when sending requests to the Export to PDF REST API. However, it is not recommended to skip authorization when running Export to PDF On-Premises in a public network.
Generating the token
Tiny recommends using the libraries listed on jwt.io to generate the token. The token is considered valid when:
- it is signed with the same SECRET_KEY as passed to the Export to PDF On-Premises instance,
- it was created within the last 24 hours,
- it is not issued in the future (i.e. the iat timestamp cannot be newer than the current time),
- it has not expired yet.
If the specific use case involves sending requests from a backend server, then JWT tokens can be generated locally, as shown in the request example below.
In the case of editor plugins or other frontend usage, a token endpoint should be created that returns a valid JWT token for authorized users, for example:
const express = require( 'express' );
const jwt = require( 'jsonwebtoken' );

const SECRET_KEY = 'secret_key';

const app = express();

// Allow cross-origin GET requests so the editor can fetch tokens from the browser.
app.use( ( req, res, next ) => {
    res.setHeader( 'Access-Control-Allow-Origin', '*' );
    res.setHeader( 'Access-Control-Allow-Methods', 'GET' );
    next();
});

// Token endpoint: responds with a fresh JWT signed with the shared secret.
app.get( '/', ( req, res ) => {
    const result = jwt.sign( {}, SECRET_KEY, { algorithm: 'HS256' } );
    res.send( result );
});

app.listen( 8080, () => console.log( 'Listening on port 8080' ) );
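A quick check of the token endpoint, assuming it runs locally (pick a different port if the converter already occupies 8080):
# Prints a freshly signed JWT
curl http://localhost:8080/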
Using editor plugins
Plugins for TinyMCE will automatically request the token from the given tokenUrl variable and set the Authorization header when making an export request.
Refer to the Export to PDF plugin documentation for details on adding the Export to PDF feature to the editor.
Request example with an Authorization header
The following example presents a request that generates a valid JWT token and sets it as the Authorization header:
const fs = require( 'fs' );
const jwt = require( 'jsonwebtoken' );
const axios = require( 'axios' );

const SECRET_KEY = 'secret';

// Sign an empty payload with the same secret as the converter instance.
const token = jwt.sign( {}, SECRET_KEY, { algorithm: 'HS256' } );

const data = {
    html: "<p>I am a teapot</p>",
    css: "p { color: red; }",
};

const config = {
    headers: {
        'Authorization': token
    },
    // The response body is binary PDF data.
    responseType: 'arraybuffer',
};

axios.post( 'http://localhost:8080/v1/convert', data, config )
    .then( response => {
        fs.writeFileSync( './file.pdf', response.data, 'binary' );
    }).catch( error => {
        console.log( error );
    });
SECRET_KEY is the key that has been passed to the Export to PDF On-Premises instance.
Please refer to the Export to PDF REST API documentation to start using the service.
If API clients like Postman or Insomnia are used, then set the JWT token as an Authorization header in the Headers tab. Do not use the built-in token authorization, as this will generate an invalid header with a Bearer prefix added to the token.
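For command-line testing without an API client, here is a sketch that signs a token with Node.js and passes it to curl. It assumes the jsonwebtoken package is installed locally and that 'secret' matches the instance's SECRET_KEY:
# Generate a token and store it in a shell variable
TOKEN=$(node -e "console.log(require('jsonwebtoken').sign({}, 'secret', { algorithm: 'HS256' }))")
# Call the convert endpoint with the Authorization header (no Bearer prefix)
curl -X POST http://localhost:8080/v1/convert \
  -H "Authorization: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"html": "<p>I am a teapot</p>"}' \
  --output file.pdf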
API Usage
The Export to PDF On-Premises converter provides the ability to convert an HTML document to a PDF file via a RESTful API.
The API is available on http://localhost:[port] (by default the port is 8080).
The REST API documentation is available at http://localhost:[port]/docs. Alternatively, refer to the specifications at https://exportpdf.converter.tiny.cloud/docs.
If authorization for the API is enabled, provide an authorization token. More instructions can be found in the authorization section.
Using additional HTTP headers
If fetching some resources (e.g. images) used in a generated PDF requires passing an additional authorization factor in the form of extra HTTP headers, there are two options:
- They can be defined on application startup by setting the EXTRA_HTTP_HEADERS environment variable, where the value is a stringified JSON object with the required headers.
- They can be defined in a request sent to the PDF Converter API in options, as shown below:
const fs = require( 'fs' );
const axios = require( 'axios' );

const data = {
    html: '<p>I am a teapot</p><img src="https://secured-example-website.com/image.jpg">',
    css: 'p { color: red; }',
    options: {
        // Extra headers the converter sends when fetching the image
        extra_http_headers: {
            authorization: 'Bearer <replace_with_your_auth_key>'
        }
    }
};

// `config` as in the authorization example, e.g. { responseType: 'arraybuffer' }
const config = { responseType: 'arraybuffer' };

axios.post( 'http://localhost:8080/v1/convert', data, config )
    .then( response => {
        fs.writeFileSync( './file.pdf', response.data, 'binary' );
    }).catch( error => {
        console.log( error );
    });
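For the first option, here is a sketch of passing the same header at application startup; the header value is a placeholder:
docker run --init -p 8080:8080 \
  -e LICENSE_KEY=[your_license_key] \
  -e EXTRA_HTTP_HEADERS='{"authorization": "Bearer <replace_with_your_auth_key>"}' \
  registry.containers.tiny.cloud/pdf-converter-tiny:[version]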
Headers defined in the application config and in the request are merged. If the same header is defined in both places, the header value from the PDF options takes priority over the value from the application config.
SSL Communication
It is possible to communicate with Export to PDF On-Premises using secure connections. To achieve this, a load balancer like NGINX or HAProxy needs to be set up with your SSL certificate. HAProxy and NGINX configuration examples are shown below.
HAProxy example
Here is a basic HAProxy configuration:
global
    daemon
    maxconn 256
    tune.ssl.default-dh-param 2048

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    bind *:443 ssl crt /etc/ssl/your_certificate.pem
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    redirect scheme https if !{ ssl_fc }
    default_backend servers

backend servers
    server server1 127.0.0.1:8000 maxconn 32
NGINX example
Here is a basic NGINX configuration:
events {
    worker_connections 1024;
}

http {
    server {
        server_name your.domain.name;
        listen 443 ssl;

        ssl_certificate /etc/ssl/your_cert.crt;
        ssl_certificate_key /etc/ssl/your_cert_key.key;

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_http_version 1.1;
        }
    }
}
Logs
The logs from Export to PDF On-Premises are written to stdout and stderr. Most of them are formatted in JSON. They can be used for monitoring or debugging purposes. In production environments, it is recommended to store the logs in files or to use a distributed logging system (like ELK or CloudWatch).
Monitoring Export to PDF with logs
To get more insight into how Export to PDF On-Premises is performing, logs can be used for monitoring. To enable these, add the ENABLE_METRIC_LOGS=true environment variable.
Log structure
The log structure contains the following information:
- handler: A unified identifier of the action. Use this field to identify calls.
- traceId: A unique RPC call ID.
- tags: A semicolon-separated list of tags. Use this field to filter metrics logs.
- data: An object containing additional information. It might vary between different transports.
- data.duration: The request duration in milliseconds.
- data.transport: The type of the request transport. It can be http or ws (websocket).
- data.status: The request status. It can be equal to success, fail, or warning.
- data.statusCode: The response status in the HTTP status code standard.
Additionally, for the HTTP transport, the following information is included:
- data.url: The URL path.
- data.method: The request method.
In case of an error, data.status will be equal to failed and data.message will contain the error message.
An example log for HTTP transport:
{
  "level": 30,
  "time": "2021-03-09T11:15:09.154Z",
  "msg": "Request summary",
  "handler": "convertHtmlToPdf",
  "traceId": "85f13d92-57df-4b3b-98bb-0ca41a5ae601",
  "data": {
    "duration": 2470,
    "transport": "http",
    "statusCode": 200,
    "status": "success",
    "url": "/v1/convert",
    "method": "POST"
  },
  "tags": "metrics"
}
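Metric entries like the one above can be filtered straight from the container output. A sketch assuming jq is installed, each log line is a single JSON object, and pdf-converter is a placeholder container name:
# Keep only metric entries and show each request's handler and duration
docker logs pdf-converter 2>&1 | grep metrics | jq '{handler, duration: .data.duration}'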
Docker
Docker has built-in logging mechanisms that capture logs from the output of containers. The default logging driver writes the logs to files. When using this driver, use the docker logs command to show logs from a container. The -f flag can be added to view logs in real time. Refer to the official Docker documentation for more information about the logs command.
When a container is running for a long period of time, the logs can take up a lot of space. To avoid this problem, make sure that log rotation is enabled. This can be set with the max-size option.
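A sketch of enabling rotation for Docker's default json-file logging driver; the size and file count are example values:
docker run --init -p 8080:8080 \
  -e LICENSE_KEY=[your_license_key] \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  registry.containers.tiny.cloud/pdf-converter-tiny:[version]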
Distributed logging
If running more than one instance of Export to PDF On-Premises, it is recommended to use a distributed logging system. It allows viewing and analyzing logs from all instances in one place.
AWS CloudWatch and other cloud solutions
If running Export to PDF On-Premises in the cloud, the simplest and recommended way is to use a logging service available from the selected provider.
Here are some of the available services:
- AWS: CloudWatch
- Google Cloud: Cloud Logging
- Azure: Azure Monitor
To use CloudWatch with AWS ECS, a log group must be created beforehand, and the log driver must be changed to awslogs. When the log driver is configured properly, logs will be streamed directly to CloudWatch. The logConfiguration may look similar to this:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-region": "us-west-2",
"awslogs-group": "tinysource",
"awslogs-stream-prefix": "tiny-pdf-converter-logs"
}
}
Refer to the Using the awslogs Log Driver article for more information.
On-Premises solutions
If using your own infrastructure, or if for some reason a service offered by a provider cannot be used, an on-premises distributed logging system can be used.
There are a lot of solutions available, including:
- ELK stack: built on top of Elasticsearch, Logstash, and Kibana. In this configuration, Elasticsearch stores logs, Filebeat reads logs from Docker and sends them to Elasticsearch, and Kibana is used to view them. Logstash is not necessary because the logs are already structured.
- Fluentd: it uses a dedicated Docker log driver to send the logs. It has a built-in frontend, but can also be integrated with Elasticsearch and Kibana for better filtering.
- Graylog: it uses a dedicated Docker log driver to send the logs. It has a built-in frontend and needs Elasticsearch to store the logs as well as a MongoDB database to store the configuration.
Example configuration
The example configuration uses Fluentd, Elasticsearch, and Kibana to capture logs from Docker.
Before running Export to PDF On-Premises, prepare the logging services. For the purposes of this example, Docker Compose is used. Create the fluentd, elasticsearch, and kibana services inside the docker-compose.yml file:
version: '3.7'
services:
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/fluent.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.5
    expose:
      - 9200
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.5
    environment:
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
    ports:
      - "5601:5601"
To integrate Fluentd with Elasticsearch, first install fluent-plugin-elasticsearch in the Fluentd image. To do this, create a fluentd/Dockerfile with the following content:
FROM fluent/fluentd:v1.10-1
USER root
RUN apk add --no-cache --update build-base ruby-dev \
&& gem install fluent-plugin-elasticsearch \
&& gem sources --clear-all
Next, configure the input server and the connection to Elasticsearch in the fluentd/fluent.conf file:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
The services are now ready to run:
docker-compose up --build
When the services are ready, start Export to PDF On-Premises:
docker run --init -p 8080:8080 \
--log-driver=fluentd \
--log-opt fluentd-address=[Fluentd address]:24224 \
[Your config here] \
registry.containers.tiny.cloud/pdf-converter-tiny:[version]
- Open Kibana in your browser. It is available at http://localhost:5601/.
- During the first run, you may be asked about creating an index. Use the fluentd-* pattern and press the "Create" button.
- After this step, the logs should appear in the "Discover" tab.