1. Development environment setup

You can adopt one of these configurations:

  • Eclipse SDK and local Cobalt server;

  • Cobalt Docker image and the IDE you prefer.

Other optional requirements are:

  • a local or remote Methode Editorial instance;

  • PostgreSQL 12 or above;

  • Elasticsearch 6.8 or above;

  • a Swing version with the same year and month as the Cobalt release version.

1.1. Installing and configuring Maven

You will need Maven, a dependency management and build automation tool for Java projects.

Download Maven from https://maven.apache.org/ and follow the installation instructions at https://maven.apache.org/install.html.

It may be useful to append the full path of the bin folder of the unpacked Maven distribution to your PATH environment variable.

Find the .m2 folder in your user directory (e.g. C:\Users\name.surname\.m2) and create a settings.xml file with the following content, replacing username and password with your credentials:

<?xml version="1.0" encoding="UTF-8"?>
<settings>
    <activeProfiles>
        <activeProfile>eidosmedia</activeProfile>
    </activeProfiles>
    <servers>
        <server>
            <id>archetype</id>
            <username>username</username>
            <password>password</password>
        </server>
    </servers>
    <profiles>
        <profile>
            <id>eidosmedia</id>
            <repositories>
                <repository>
                    <id>archetype</id>
                    <name>artifactory</name>
                    <url>https://artifactory.eidosmedia.sh/artifactory/maven-public/</url>
                    <releases>
                        <enabled>true</enabled>
                        <checksumPolicy>fail</checksumPolicy>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                        <checksumPolicy>warn</checksumPolicy>
                    </snapshots>
                </repository>
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    <id>archetype</id>
                    <name>artifactory</name>
                    <url>https://artifactory.eidosmedia.sh/artifactory/maven-public/</url>
                </pluginRepository>
            </pluginRepositories>
        </profile>
    </profiles>
</settings>

If you don’t want to use the default local repository in the .m2 directory, you can add an optional localRepository node:

<settings>
...
    <localRepository>/path/to/your/local/repository</localRepository>
</settings>
If you are using the Eclipse IDE, you can point it to a different settings.xml instead of modifying the default one in the .m2 directory. This is explained in the related section of this documentation.

1.2. Maven Archetypes

A set of maven archetypes is available to build server side extensions on top of the Cobalt Server:

  • Site Service Extension (com.eidosmedia.portal:archetype.web-base.extensions): extensions library (jar) archetype for the Site Service module

  • Directory Service Extension (com.eidosmedia.portal:archetype.directory.extensions): extensions library (jar) archetype for the Directory Service Module

  • Publication Service Notifier Extension (com.eidosmedia.portal:archetype.psn.extensions): custom publication notifications handler

  • Standalone Web Module (com.eidosmedia.portal:archetype.web-module): standalone web application (war) archetype with shared Cobalt libraries

1.2.1. Conventions for group and artifact names

When creating new Java projects with Maven, you must define a groupId and an artifactId. It is good practice to follow the guidelines provided on the Maven website to pick a consistent groupId and artifactId for your project.

For example, for EidosMedia Cobalt projects, the prefix com.eidosmedia.cobalt should be used for the groupId. For a customer-specific extension, a domain controlled by the customer can be used as a prefix instead.

1.2.2. Using archetypes from command line

You can bootstrap a new project using the archetypes from the command line. For example:

mvn archetype:generate -DarchetypeGroupId=com.eidosmedia.portal -DarchetypeArtifactId=archetype.web-base.extensions -DarchetypeVersion=3.2022.03
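Following the naming conventions from the previous section, a complete invocation might look like the sketch below. The project groupId, artifactId, and version shown here are hypothetical examples; replace them with your own values:

```shell
mvn archetype:generate \
    -DarchetypeGroupId=com.eidosmedia.portal \
    -DarchetypeArtifactId=archetype.web-base.extensions \
    -DarchetypeVersion=3.2022.03 \
    -DgroupId=com.eidosmedia.cobalt.examples \
    -DartifactId=site-extensions \
    -Dversion=1.0.0-SNAPSHOT
```

Maven will prompt interactively for any property not supplied on the command line.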

1.2.3. Using archetypes in IntelliJ IDE

From the New Project menu, select Maven, toggle "Create from archetype" and select "Add Archetype…​"

intellij-archetypes
Figure 1. Create from archetype in IntelliJ

1.2.4. Using archetypes in Eclipse IDE

From the New Project menu, select Maven, then "Maven Project", and continue until the "Select an Archetype" window. Select the local catalog you registered (see the Importing the archetype-catalog in the Eclipse IDE section), then select the archetype and hit "Next".

eclipse-archetype
Figure 2. Create from archetype in Eclipse

1.3. Postgres

Cobalt uses PostgreSQL as its main database. If you run a monolith Cobalt instance (with all the services in the same installation) locally, you will need a PostgreSQL instance to connect to.

1.3.1. Running Postgres with Docker

You can easily start a Postgres instance with Docker.

Create the init-postgres.sh Postgres initialization script:

#!/bin/bash
{
    echo "# Cobalt specific configuration"
    echo "listen_addresses='*'"
    echo "max_prepared_transactions = 20"
    echo "default_transaction_isolation = 'repeatable read'"
} >> "$PGDATA/postgresql.conf"

psql -v ON_ERROR_STOP=1 -d "$POSTGRES_DB" --username "$POSTGRES_USER" <<-EOSQL
    CREATE SCHEMA cobalt;
    ALTER ROLE $POSTGRES_USER in database $POSTGRES_DB SET search_path='cobalt';
    GRANT ALL PRIVILEGES ON SCHEMA cobalt TO $POSTGRES_USER;
    CREATE EXTENSION IF NOT EXISTS btree_gin WITH SCHEMA cobalt;
    CREATE SCHEMA directory;
    GRANT ALL PRIVILEGES ON SCHEMA directory TO cobalt;
    CREATE SCHEMA moderation;
    GRANT ALL PRIVILEGES ON SCHEMA moderation TO cobalt;
    CREATE SCHEMA comments;
    GRANT ALL PRIVILEGES ON SCHEMA comments TO cobalt;
    CREATE ROLE wf_user WITH PASSWORD 'wf_user' LOGIN;
    CREATE DATABASE workflow;
    ALTER ROLE wf_user in database workflow SET search_path='workflow';
    \connect workflow;
    CREATE SCHEMA workflow;
    GRANT ALL PRIVILEGES ON SCHEMA workflow TO wf_user;
    ALTER DATABASE workflow owner to wf_user;
EOSQL

Then start a Postgres Docker container, mounting the script as a bind mount:

docker run -d --name=postgres \
           -p 5432:5432 \
           -e POSTGRES_USER=cobalt \
           -e POSTGRES_PASSWORD=cobalt \
           -v $PWD/init-postgres.sh:/docker-entrypoint-initdb.d/init-postgres.sh postgres:12.6

1.3.2. Using your Postgres installation

If you already have a PostgreSQL installation, you need to configure it to work with Cobalt.

Modify these settings of your Postgres installation by editing the postgresql.conf file:

max_prepared_transactions = 20
default_transaction_isolation = 'repeatable read'
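After restarting Postgres (or reloading its configuration), you can sanity-check that the settings were picked up by querying them from psql:

```sql
SHOW max_prepared_transactions;
SHOW default_transaction_isolation;
```

The values should match the ones configured above.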

Open the psql console and run these commands to create database schemas and users:

CREATE ROLE cobalt WITH PASSWORD 'cobalt' LOGIN;
CREATE DATABASE cobalt;
ALTER ROLE cobalt in database cobalt SET search_path='cobalt';
GRANT ALL ON DATABASE cobalt TO cobalt;
\connect cobalt;
CREATE SCHEMA cobalt;
GRANT ALL ON SCHEMA cobalt TO cobalt;
CREATE EXTENSION IF NOT EXISTS btree_gin WITH SCHEMA cobalt;
CREATE SCHEMA directory;
GRANT ALL ON SCHEMA directory TO cobalt;
CREATE SCHEMA comments;
GRANT ALL ON SCHEMA comments TO cobalt;
CREATE SCHEMA moderation;
GRANT ALL ON SCHEMA moderation TO cobalt;
CREATE ROLE wf_user WITH PASSWORD 'wf_user' LOGIN;
CREATE DATABASE workflow;
ALTER ROLE wf_user in database workflow SET search_path='workflow';
\connect workflow;
CREATE SCHEMA workflow;
GRANT ALL PRIVILEGES ON SCHEMA workflow TO wf_user;
ALTER DATABASE workflow owner to wf_user;

Modify the cobalt.properties to reflect the host and port of your Postgres instance:

bitronix.resource.ds1.driverProperties.serverName=localhost
bitronix.resource.ds1.driverProperties.portNumber=5432

1.4. Running Elasticsearch with Docker

Start a single-node Elasticsearch cluster:

docker run -d --name=elastic \
           -p 9200:9200 \
           -p 9300:9300 docker.elastic.co/elasticsearch/elasticsearch:6.8.10
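Once the container is up, you can check that the cluster is reachable and note its cluster name (docker-cluster by default for the official image) with the standard cluster health API:

```shell
curl 'http://localhost:9200/_cluster/health?pretty'
```

The response includes the cluster_name and status fields.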

1.5. Connecting to a remote Cobalt instance

You can connect your local development server to a remote Cobalt instance to use production data.

Enable connection to a remote repository in cobalt.properties:

repository.remote=true

Edit the repository name in your local cobalt.properties to match the one in the remote server’s cobalt.properties:

repository.name=repo-local

You also need to configure these properties to match your remote installation:

common.global.zone=global
common.defaults.domain=default
common.defaults.zone=default
common.defaults.realm=default

service.discovery.localZone=default
service.discovery.globalZone=global

1.5.1. Using a remote user connector

Cobalt relies on a component called UsersServiceConnector to handle authentication and authorization.

The default implementation of this connector connects to a local Postgres instance using the same properties used by the Directory Service configuration:

directory.persistence.postgresql.jdbcUrl=jdbc:postgresql://localhost:5432/cobalt?currentSchema=directory
directory.persistence.postgresql.userName=cobalt
directory.persistence.postgresql.password=cobalt

When connecting to a remote instance, you will usually connect to the remote instance’s directory service, so you will not need Postgres locally.

To switch to the remote implementation of the connector, add:

users.service.connector.type=remote

and make sure you can discover a remote directory service instance, as explained in the following section.

1.5.2. Configuring the remote services

You need your local server to connect to remote Cobalt services.

If possible (i.e. if you can reach the same network), you can attach to the service discovery registry of the remote instance to discover all remote services:

service.discovery.connectString=<zookeeper-host>:<zookeeper-port>
service.discovery.serviceDiscoveryName=zookeeper

Your own service will be registered in the same registry as the remote instance, and will be discoverable, unless you specify:

service.discovery.register=false

Alternatively, you can manually register the services in cobalt.properties:

service.discovery.service.cobalt-repo.domain=${common:common.defaults.domain}
service.discovery.service.cobalt-repo.zone=${common:common.defaults.zone}
service.discovery.service.cobalt-repo.type=repository
service.discovery.service.cobalt-repo.repository=${common:repository.name}
service.discovery.service.cobalt-repo.uri=http://my-remote-server:8480/repository.rest
service.discovery.service.cobalt-directory.domain=${common:common.defaults.domain}
service.discovery.service.cobalt-directory.zone=${common:common.global.zone}
service.discovery.service.cobalt-directory.type=directory
service.discovery.service.cobalt-directory.uri=http://my-remote-server:8480/directory
service.discovery.service.cobalt-directory.realm=${common:common.defaults.realm}
service.discovery.service.cobalt-cobaltpub.domain=${common:common.defaults.domain}
service.discovery.service.cobalt-cobaltpub.zone=${common:common.defaults.zone}
service.discovery.service.cobalt-cobaltpub.type=cobaltpub
service.discovery.service.cobalt-cobaltpub.uri=http://my-remote-server:8480/cobaltpub

In the snippet above, we registered the remote repository.rest, cobaltpub, and directory service instances.

To avoid also running these services locally, it is better to remove the corresponding context configurations from the conf/Catalina/localhost folder.

In many cases you will mainly be developing the site service, so you can comment out or delete all the XML files except ROOT.xml.

1.6. Docker

A Docker image is available for development. You can log in to the Docker repository artifactory.eidosmedia.sh/docker with the credentials provided by EidosMedia and pull the image:

docker login artifactory.eidosmedia.sh/docker
docker pull artifactory.eidosmedia.sh/docker/cobalt:3.2022.03

Running the container locally is as simple as:

docker run --name cobalt \
           -p 8480:8480 \
           -p 8843:8843 \
           -e POSTGRES_HOST={postgres-host} \
           -e repository.search.elasticClientNodes={elastic-hosts} \
           -e repository.search.elasticClusterName=docker-cluster \
           artifactory.eidosmedia.sh/docker/cobalt:3.2022.03

replacing {postgres-host} with the actual Postgres host and {elastic-hosts} with a comma-separated list of Elasticsearch nodes. The cluster name of a Dockerized Elasticsearch is usually docker-cluster; if yours differs, be sure to use the correct name.

1.6.1. Using configuration properties

You can set any configuration property available in the cobalt.properties file through environment variables. For example, to specify the Postgres host, instead of the POSTGRES_HOST variable you could use:

docker run --name cobalt \
           -e bitronix.resource.ds1.driverProperties.serverName={postgres-host} \
           ...
           artifactory.eidosmedia.sh/docker/cobalt:3.2022.03

1.6.2. Exposing configuration, data and source directories

To expose the configuration, data, and source directories, mount the corresponding directories onto directories of your choice on your host. On the first run, the container will synchronize the folders if it finds them empty or partially empty (priority is given to files on the host).

docker run --name cobalt \
           -v <your-conf-folder>:/cobalt-dist/conf \
           -v <your-src-folder>:/cobalt-dist/src \
           -v <your-data-folder>:/cobalt-dist/data \
           artifactory.eidosmedia.sh/docker/cobalt:3.2022.03

1.6.3. Running a subset of Cobalt services

When developing, you will usually run only the service you need to extend, or only the site service if you are developing themes or front-end code in general:

docker run --name cobalt \
           -e COBALT_SERVICES=site \
           artifactory.eidosmedia.sh/docker/cobalt:3.2022.03

1.6.4. Debugging the container

To start Cobalt in debug mode:

docker run --name cobalt \
           -p 8480:8480 \
           -p 8843:8843 \
           -p 10020:10020 \
           artifactory.eidosmedia.sh/docker/cobalt:3.2022.03 cobalt.sh run-debug

You can then attach a debugger to the socket opened on port 10020.
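As a minimal sketch, you can attach the JDK’s command-line debugger to that socket; any IDE remote-debug configuration pointing at the same host and port works equally well:

```shell
jdb -attach localhost:10020
```

This assumes the container’s debug port 10020 is published on the local host, as in the docker run command above.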

1.6.5. Development docker-compose

It’s easy to develop extensions by running a Cobalt Docker container and mounting the compiled class files.

Below is a docker-compose file, preconfigured to connect to a remote Cobalt instance:

version: '3'
services:
  cobalt:
    image: artifactory.eidosmedia.sh/docker/cobalt:3.2022.03
    ports:
      - "80:8480"
      - "10020:10020" # JPDA port for remote debugging
    volumes:
      # Mount the site service extension compiled classes
      - ./target/classes:/cobalt-dist/extra/classes/site
      # Mount the directory service extension compiled classes
      #- /path/to/directory/service/extension.jar:/cobalt-dist/extra/classes/directory
    environment:
      - COBALT_SERVICES=site # enable also directory, if you need to develop extensions for it
      - users.service.connector.type=remote
      - repository.remote=true
      - repository.name=cobalt-dev # must be the same repository name of the remote server
      - service.discovery.service.cobalt-repo.domain=$${common:common.defaults.domain}
      - service.discovery.service.cobalt-repo.zone=$${common:common.defaults.zone}
      - service.discovery.service.cobalt-repo.type=repository
      - service.discovery.service.cobalt-repo.repository=$${common:repository.name}
      - service.discovery.service.cobalt-repo.uri=http://host.docker.internal:9000/repository.rest
      - service.discovery.service.cobalt-directory.domain=$${common:common.defaults.domain}
      - service.discovery.service.cobalt-directory.zone=$${common:common.global.zone}
      - service.discovery.service.cobalt-directory.type=directory
      - service.discovery.service.cobalt-directory.uri=http://host.docker.internal:9000/directory
      - service.discovery.service.cobalt-directory.realm=$${common:common.defaults.realm}
    command: ["cobalt.sh", "run-debug"]

In the docker-compose.yml you also need to configure the same repository name as the remote server:

environment:
    - repository.name=cobalt-dev

You need to configure the compose file to start only the services you need to extend, for example:

environment:
    - COBALT_SERVICES=site,directory

You can develop and automatically deploy your extensions by mounting the compiled .class files in the corresponding folder.

For site service extensions:

volumes:
    - ./target/classes:/cobalt-dist/extra/classes/site

For directory service extensions:

volumes:
    - ./target/classes:/cobalt-dist/extra/classes/directory

This assumes you are using Maven (or Gradle), that the compiled classes are under the ./target/classes/ folder, and that your docker-compose.yaml file is in the root directory of your extension project. Be sure to enable the automatic build features of your IDE, so that your classes are always up to date with new code.

1.7. docker-compose.yaml file in archetype.web-base.extensions

The com.eidosmedia.portal:archetype.web-base.extensions:3.2022.03 archetype already includes, in the root of the project, a preconfigured docker-compose.yaml file to bootstrap a local Cobalt instance with Postgres and Elasticsearch.

The file is already preconfigured to share .class files with the Cobalt container.

You can start from this file and modify it, as explained above, to connect to a remote instance.

1.8. Working in Visual Studio Code

With Visual Studio Code you can use the Docker extension and configure remote debugging against a Dockerized instance of Cobalt.

This part of the guide assumes one already has an extensions project in the Visual Studio Code workspace, generated from one of the archetypes described in the Maven Archetypes section.

It’s better to set the Hot Code Replace option to auto, in order to speed up development.

Open the Settings tab and search for java to find the Java Debug section:

hot-code-replace
Figure 3. Set Hot Code Replace to auto

1.8.1. Create a new Debug configuration

Open Run and Debug tab to add a new debug configuration and click create a launch.json file:

debug-configuration
Figure 4. Create a new Debug configuration

The new debug configuration should look like this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "java",
            "name": "Cobalt",
            "request": "attach",
            "hostName": "localhost",
            "port": 10020
        }
    ]
}

1.8.2. Run and debug

Right-click the docker-compose.yaml file and then Compose Up to bootstrap the environment:

run-env
Figure 5. Run the environment

As an alternative, you can use docker compose commands from the terminal:

docker compose up -d

Then, to monitor the cobalt logs:

docker compose logs -f cobalt

To attach and debug the environment, open the Run and Debug tab and run the created Cobalt configuration:

debug-env
Figure 6. Debug the environment

1.9. Working in the IntelliJ IDE

The best way to develop Cobalt extensions in the IntelliJ IDE is to use the Docker plugin and configure remote debugging against a Dockerized instance of Cobalt.

This part of the guide assumes one already has an extensions project in the IntelliJ workspace, generated from one of the archetypes described in the Maven Archetypes section.

It’s better to enable the automatic build options of IntelliJ, as explained at this link.

1.9.1. Installing the Docker plugin

Navigate to IntelliJ Preferences, select Plugins and Install JetBrains plugin…​:

install-docker-plugin
Figure 7. Install Docker Plugin

Search for Docker in the next window and install the Docker Integration plugin:

install-docker-plugin-2
Figure 8. Install Docker Integration

1.9.2. Docker Run/Debug Configuration

On the top-right corner of the IDE, select the Edit Configurations menu button:

new-configuration
Figure 9. New Configuration

Hit the + button to create a new Docker→Docker-compose configuration:

new-docker-configuration
Figure 10. New Docker Configuration

In the Compose file(s) input, reference the docker-compose.yml file from the Development docker-compose section.

Remember to change the reference to the remote Cobalt instance, if needed, as explained in the Development docker-compose section.

Remember to change the volume mount in the docker-compose file to mount the .class files in the right extension folder:

  • /cobalt-dist/extra/classes/site/ for Site Service extensions

  • /cobalt-dist/extra/classes/directory for Directory Service extensions

volumes:
    # Mount the site service extension .class files to be deployed in the /cobalt-dist/extra/classes/site folder as in this example
    - ./target/classes:/cobalt-dist/extra/classes/site
    # Mount the directory service extension .class files to be deployed in the /cobalt-dist/extra/classes/directory folder as in this example
    - ./target/classes:/cobalt-dist/extra/classes/directory

The newly created configuration can be selected from the drop-down menu in the top-right corner of the workspace and started by hitting the "play" icon.

1.9.3. New debug configuration

By default, the docker-compose file exposes port 10020. A remote debug configuration can attach to that port and debug directly from the IntelliJ IDE.

In the top-right corner of the IDE, select the Edit Configurations menu button, create a new Remote Debug configuration, set the port to 10020, and select your project in the Select sources using module’s classpath drop-down:

remote-debug-configuration
Figure 11. Remote debug configuration
remote-debug-configuration-2
Figure 12. Remote debug configuration 2

With the docker-compose setup already running, you can attach the debugger by selecting and starting the new remote debug configuration from the drop-down menu in the top-right corner of the workspace.

1.10. Working in the Eclipse IDE

A set of plugins to ease development in the Eclipse IDE is available.

1.10.1. Installing the Eclipse plugins

You need to use the Install New Software…​ feature and install the plugins from the following update sites:

install-cobalt-sdk
Figure 13. Install Cobalt SDK Plugin

1.10.2. Downloading the Cobalt Distribution

You should download the com.eidosmedia.portal:cobalt:3.2022.03:dist artifact from the repository provided by EidosMedia, picking the latest version available, and unpack it in a directory of your choice.

1.10.3. Creating a new Cobalt Server

From the File → New → Other menu, select Server and click Next.

new-server
Figure 14. New Server

Select Cobalt Server 1.0 in the EidosMedia S.p.A. folder, name it as you prefer, and click Next.

new-server-2
Figure 15. Cobalt Server

Select Browse…​ to link the server to the Cobalt distribution downloaded before.

new-server-3
Figure 16. Cobalt SDK Folder

In the Project Explorer window you can see the server configuration files copied from the cobalt-dist. You can edit these files without impacting the original installation.

If you double-click the newly created server in the Servers window, you can edit Cobalt’s data and source folders.
cobalt-server-configuration
Figure 17. Cobalt Server Configuration

Since you will develop extensions for only a few services, it is better to disable the ones you don’t need, to reduce the server startup time.

You need to delete or disable, by replacing the .xml extension with anything else, the contexts of the underlying Tomcat that you don’t want to start. These contexts are located in the conf/Catalina/localhost folder:

disabling-services
Figure 18. Disabling unused services

NOTE (important for Windows users): If you encounter access denied errors, mainly related to temporary file creation, you have to change your JVM temporary directory. To do so, add the following parameter to the VM arguments as shown in the following screenshot.

-Djava.io.tmpdir=C:/dev/tmp
eclipse-java-io-tmpdir
Figure 19. VM arguments for Cobalt configuration

1.10.4. Importing your local maven configuration in the Eclipse IDE

Eclipse, by default, uses the Maven settings.xml in the ${user.home}/.m2 directory of your system. If you want to use a different configuration file, open the Eclipse preferences, go to the Maven→User Settings window, and set the file accordingly in the User Settings input form:

settings-maven-eclipse
Figure 20. Eclipse maven configuration

1.10.5. Importing the archetype-catalog in the Eclipse IDE

Create the following file in a directory of your choice:

<archetype-catalog xsi:schemaLocation="http://maven.apache.org/plugins/maven-archetype-plugin/archetype-catalog/1.0.0 http://maven.apache.org/xsd/archetype-catalog-1.0.0.xsd"
    xmlns="http://maven.apache.org/plugins/maven-archetype-plugin/archetype-catalog/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <archetypes>
    <archetype>
      <groupId>com.eidosmedia.portal</groupId>
      <artifactId>archetype.directory.extensions</artifactId>
      <version>3.2022.03</version>
      <description>archetype.directory.extensions</description>
    </archetype>
    <archetype>
      <groupId>com.eidosmedia.portal</groupId>
      <artifactId>archetype.web-base.extensions</artifactId>
      <version>3.2022.03</version>
      <description>archetype.web-base.extensions</description>
    </archetype>
    <archetype>
      <groupId>com.eidosmedia.portal</groupId>
      <artifactId>archetype.web-module</artifactId>
      <version>3.2022.03</version>
      <description>archetype.web-module</description>
    </archetype>
    <archetype>
      <groupId>com.eidosmedia.portal</groupId>
      <artifactId>archetype.psn.extensions</artifactId>
      <version>3.2022.03</version>
      <description>archetype.psn.extensions</description>
    </archetype>
  </archetypes>
</archetype-catalog>

Replace the versions with the version of Cobalt you downloaded earlier.

Open the Eclipse preferences, go to the Maven→Archetypes window, and click Add Local Catalog…​ to add the file you just created.

local-catalog
Figure 21. Add local archetype-catalog.xml to eclipse

1.10.6. Developing and deploying a Site Service Extension library

Create a new project using the com.eidosmedia.portal:archetype.web-base.extensions:

web-base-extension-eclipse
Figure 22. Create Site service extensions archetype

Select appropriate values for the groupId, artifactId, and version number. Optionally, you can override the suggested package if needed.

configuring-maven-project
Figure 23. Configuring the project

If this is the first time you create a Cobalt extensions project, it will take a while to download the dependencies and build the project before it is available and ready in the Project Explorer window.

To deploy your app, use the Eclipse Add and Remove feature by right-clicking the Cobalt server:

add-and-remove
Figure 24. Add module to server 1

Select your project, click Add, and then Finish.

add-module-to-server
Figure 25. Add module to server 2

You can now run or debug your server.

1.10.7. Developing and deploying a Directory Service Extension library

Create a new project using the com.eidosmedia.portal:archetype.directory.extensions:

directory-extension-eclipse
Figure 26. Create Directory service extensions archetype
directory-extension-eclipse02
Figure 27. Create Directory service extensions archetype

After choosing the correct archetype, you must fill in the fields in the dialog window. The properties allow you to choose the names of the classes that will be created:

  • resourceName: Specifies the name of the class that contains the REST endpoints.

  • connectorName: Specifies the name of the class that contains the connector’s logic.

  • connectorKey: Specifies the value of a constant used to identify the connector in the directory module.

  • dataName: Specifies the name of the bean class.

  • resourcePath: Specifies the REST endpoint.

Deploy your app using the Eclipse Add and Remove feature, as explained in Developing and deploying a Site Service Extension library.

1.10.8. Developing and deploying a Web Module

You can run web modules on Cobalt as you would normally do with any J2EE application server or servlet container.

By running web modules on the Cobalt Server, you get access to a set of shared resources, granting single sign-on and automatic registration and discovery of modules.

Use the Maven archetype to create a new Cobalt Web Module:

maven-archetype-web-module
Figure 28. Web Module Maven Archetype

This will create a new Maven web module with a preconfigured pom.

Add the module to the server with its Add and Remove feature:

add-module
Figure 29. Add module to the server

This will create a sample context XML entry in the conf/Catalina/localhost folder of the server configuration. Remove the .sample suffix and name the context as the context path you desire:

deploy-web-module
Figure 30. Deploy module to the server

You can start developing your own service using Cobalt shared resources.

2. Common Development Patterns

2.1. Logging

Cobalt uses the slf4j interfaces for logging. It’s common practice to create a static Logger instance at the top of your class:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

private static final Logger logger = LoggerFactory.getLogger(YourClass.class);
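With the logger in place you can use slf4j’s parameterized messages, which avoid string concatenation when the log level is disabled. A short sketch (nodeId and e are hypothetical variables for illustration):

```java
// {} placeholders are filled only if the level is enabled
logger.debug("Publishing node {}", nodeId);

// a trailing Throwable argument is logged as an exception with its stack trace
logger.error("Publication failed for node {}", nodeId, e);
```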

2.2. Dependency Injection

Cobalt relies internally on dependency injection frameworks. When developing extensions for the Site, Directory, and other Cobalt services, you can inject core components or utilities directly into your implementations of the extension interfaces we provide.

2.2.1. Jersey Client

You can inject a preconfigured Jersey Client instance to perform requests to web services:

private Client client;

@Inject
public void setClient(Client client) {
    this.client = client;
}

Or you can customize the configuration:

@Inject
public void setClient(Client client) {
    this.client = client.property(ClientProperties.CONNECT_TIMEOUT, 3000)
            .property(ClientProperties.READ_TIMEOUT, 3000);
}
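As a sketch of how the injected client might be used, the standard JAX-RS fluent API applies; the target URL below is a placeholder pointing at the /core/types endpoint mentioned later in this guide:

```java
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// perform a GET request and read the raw response body
Response response = client.target("http://localhost:8480/core/types")
        .request(MediaType.APPLICATION_JSON)
        .get();
String body = response.readEntity(String.class);
```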

2.2.2. Jackson ObjectMapper

You can inject a preconfigured Jackson ObjectMapper to work with JSON:

private ObjectMapper mapper;

@Inject
public void setObjectMapper(ObjectMapper mapper) {
    this.mapper = mapper;
}
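For example, to read a field out of a JSON document with the standard Jackson tree API (a sketch; json is a hypothetical string variable, and readTree throws a checked IOException you will need to handle):

```java
import com.fasterxml.jackson.databind.JsonNode;

JsonNode root = mapper.readTree(json);
// path() returns a missing node instead of throwing when a field is absent
String userName = root.path("user").path("name").asText();
```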

2.2.3. ServiceDiscoveryClient

You can inject a com.eidosmedia.portal.servicediscovery.ServiceDiscoveryClient instance and use its methods to discover other services:

private ServiceDiscoveryClient sdClient;

@Inject
public void setServiceDiscoveryClient(ServiceDiscoveryClient sdClient) {
    this.sdClient = sdClient;
}

For example, to discover the URL of the MODERATION service:

String url = sdClient.getServiceUri(ServiceType.MODERATION);

This assumes that one or more MODERATION services are available in the same zone/domain as the service calling the method.

The zone and domain depend on the specific deployment and on how the Cobalt services are distributed in your environment.

To specify where to look during the service URL lookup, pass the coordinates of the requested service explicitly:

String url = sdClient.getServiceUri(domain, zone, ServiceType.MODERATION, null);
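Putting discovery and the injected Jersey client together, a discovered URL can feed an outgoing request. A sketch, where the "health" path is a hypothetical endpoint used only for illustration:

```java
String url = sdClient.getServiceUri(ServiceType.MODERATION);
String result = client.target(url)
        .path("health") // hypothetical endpoint, for illustration only
        .request()
        .get(String.class);
```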

2.2.4. ServiceInfo

A com.eidosmedia.portal.ServiceInfo bean is available to get information on the running service.

private ServiceInfo info;

@Inject
public void setServiceInfo(ServiceInfo info) {
    this.info = info;
}

3. Publication Process Customization

3.1. Define new custom types

Cobalt is released with a set of predefined types. This list is visible through the REST API at /core/types.

You can create your own custom types and then reuse them in the publishing process.

Suppose you want to create a new type to publish video stories. Since they are still articles, we want to keep article as their base type and define the new type videostory.

To do this, edit the src/pub/custom-types.xml file by adding this line to the <CustomTypes /> block:

<Type typeName="videostory" baseType="article" label="Video story" description="This is an article with lots of videos inside" icon="videostory.icon" />

Cobalt reloads the list of types on the fly, without having to restart the server. Wait a few moments and the new type will be usable in your new publications.

If you want to create a custom base type, leave the baseType attribute empty in the <Type /> block.

<Type typeName="recipe" baseType="" label="Recipe" description="Recipe" icon="recipe.icon" />

3.2. Publish from an external source

You can publish from your own custom repository by using the APIs of the Publication Service (exposed by the Cobalt Core Service).

The main steps are the following:

  1. Log in to the Directory Service to get a session token (emauth). You’ll use this token to authenticate yourself on the next calls

  2. Start a new publication to get a new publication id

  3. Upload all the resources involved in the publication (with their content and metadata)

  4. If all the uploads succeed, commit the publication, otherwise abort/roll it back

  5. Log out from your open session
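The flow above can be sketched with the JDK's built-in java.net.http client. The host and the endpoint paths come from the examples in this guide; the class and method names are our own, and error handling is omitted for brevity:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PublicationFlow {

    private final String base; // e.g. "http://localhost:8480"
    private final HttpClient client = HttpClient.newHttpClient();

    public PublicationFlow(String base) {
        this.base = base;
    }

    // Step 1: POST credentials to the Directory Service
    // (parsing session.id out of the response is omitted here).
    public String login(String loginJson) throws Exception {
        HttpRequest req = post(base + "/directory/sessions/login", loginJson);
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    // Step 4: commit the publication with the emauth token and publication id.
    public int publish(String emauth, String pubId) throws Exception {
        HttpRequest req = post(publishUri(emauth, pubId).toString(), "{}");
        return client.send(req, HttpResponse.BodyHandlers.ofString()).statusCode();
    }

    // Query-string layout used by the /core/publication endpoints.
    public URI publishUri(String emauth, String pubId) {
        return URI.create(base + "/core/publication/publish?emauth=" + emauth + "&pubId=" + pubId);
    }

    private static HttpRequest post(String uri, String jsonBody) {
        return HttpRequest.newBuilder(URI.create(uri))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }
}
```

The prepare, update and abort calls follow the same pattern against their respective endpoints, as detailed in the next paragraphs.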

3.2.1. Login

Send username and password to get back the session id (emauth).

Request (/directory/sessions/login)
POST /directory/sessions/login HTTP/1.1
Host: localhost:8480
Content-Type: application/json
Cache-Control: no-cache

{
	"name": "aleber",
	"password": "al3b3r"
}
Response
{
    "user": {
        "type": "USER",
        "created": "2018-07-29T14:04:27.848Z",
        "creatorId": "32b3c7f6-072b-4ab8-adfc-27ae06d5ff52",
        "lastModifierId": "32b3c7f6-072b-4ab8-adfc-27ae06d5ff52",
        "modified": "2018-07-29T14:04:27.848Z",
        "version": "1.0",
        "name": "aleber",
        "alias": "aleber",
        "role": "USER",
        "status": "ENABLED",
        "lastLogin": "2018-07-29T14:05:26.144Z",
        "bookmarks": [],
        "id": "7c785cb3-97e9-4eac-a531-9b1902528cb9"
    },
    "session": {
        "created": "2018-07-29T14:05:26.144Z",
        "creatorId": "7c785cb3-97e9-4eac-a531-9b1902528cb9",
        "lastModifierId": "7c785cb3-97e9-4eac-a531-9b1902528cb9",
        "modified": "2018-07-29T14:05:26.144Z",
        "version": "1.0",
        "ip": "127.0.0.1",
        "lastAccess": "2018-07-29T14:05:26.144Z",
        "rememberMe": false,
        "id": "93995770-8cbe-4c76-9536-ea9cc079a461"
    }
}

The emauth is the value in session.id, in this case "93995770-8cbe-4c76-9536-ea9cc079a461".
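In real code you would extract session.id with a JSON library such as Jackson; purely as a dependency-free illustration, the sketch below pulls it out of the login response with a regular expression (the class name is ours):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EmauthExtractor {

    // Grabs the "id" field inside the "session" object of the login response.
    private static final Pattern SESSION_ID =
            Pattern.compile("\"session\"\\s*:\\s*\\{[^}]*\"id\"\\s*:\\s*\"([^\"]+)\"", Pattern.DOTALL);

    public static String extractEmauth(String loginResponseJson) {
        Matcher m = SESSION_ID.matcher(loginResponseJson);
        if (!m.find()) {
            throw new IllegalArgumentException("no session.id in response");
        }
        return m.group(1);
    }
}
```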

3.2.2. Generate publication id (prepare)

Send a request to get a new publication id. The publication can include all the objects involved; this means that with a single publication you can upload complex objects, like a web page with many articles.

Request (/core/publication/prepare)
POST /core/publication/prepare?emauth=f40ca28e-b5bc-4b2c-9743-3c21a2cd0ee5 HTTP/1.1
Host: localhost:8480
Content-Type: application/json
Cache-Control: no-cache

{
	"sites": ["test-site"],
	"refs": [{
		"reference": "art0-ref",
		"source": "art0-source"
	}],
	"prepareInfo": {
		"publishSource": "my-source",
		"publishDate": "2018-07-27T12:13:14.156Z"
	}
}

reference is a unique key for your publish source. It gives you the possibility to update already published content later, by using the same reference in another publication.

Response
{
    "publishId": "0247-0a4286346f49-8fe1435ff964-0003"
}

You’ll need this publication id (publishId) to do the other publication steps.

3.2.3. Update contents (update)

For all the elements/contents involved in the publication, you need to make a separate call to the /core/publication/update endpoint.

Each call is a multipart/form-data POST containing at least two parts:

  • data: a json with all the metadata of the content (and the references to all the other files)

  • content: the binary file of the content

For example, suppose we want to add a simple XML content.

Request (/core/publication/update)
POST /core/publication/update?emauth=52a3cf82-9b81-46a3-a23a-fe1250234c6a&pubId=0247-0a42727e4f1a-7fb0caac81f7-0003 HTTP/1.1
Host: localhost:8480
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Cache-Control: no-cache

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="data"; filename="art0.json"
Content-Type: application/json


------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="content"; filename="art0.xml"
Content-Type: text/xml


------WebKitFormBoundary7MA4YWxkTrZu0gW--
art0.json
{
	"dataType": "node",
	"foreignId": "art0-ref",
	"title": "my title art0",
	"summary": "my summary",
	"authors": ["ale.ber"],
	"sys": {
		"baseType": "article",
		"type": "article"
	},
	"attributes": {
		"custom": {
			"myattr": "myval"
		},
		"tags": ["t1"]
	},
	"files": {
		"content": {
			"fileName": "art0.xml",
			"partName": "content",
			"mimeType": "text/xml",
			"data": null
		}
	},
	"pubInfoEx": {
		"test-site": {
			"siteName": "test-site",
			"sectionPath": ""
		}
	}
}
art0.xml
<?xml version="1.0" encoding="UTF-8"?>
<document>
	<headgroup>
		<headline>
			<p>My title</p>
		</headline>
	</headgroup>
	<summary>
		<p>My summary</p>
	</summary>
	<byline>
		<p>ale.ber</p>
	</byline>
	<content>
		<p>Lorem ipsum</p>
	</content>
</document>
Response

In case of success the response is an empty JSON, the status code is 200.
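Assembling such a multipart body by hand is error-prone, so here is a minimal sketch of a helper that builds the two-part body shown above (the helper and parameter names are ours; a real client would usually delegate this to an HTTP library):

```java
public class MultipartBuilder {

    /**
     * Builds a multipart/form-data body with the two parts the
     * /core/publication/update endpoint expects: "data" (JSON metadata)
     * and "content" (the content file).
     */
    public static String buildUpdateBody(String boundary,
                                         String dataFileName,
                                         String dataJson,
                                         String contentFileName,
                                         String contentMimeType,
                                         String contentBody) {
        StringBuilder sb = new StringBuilder();
        sb.append("--").append(boundary).append("\r\n")
          .append("Content-Disposition: form-data; name=\"data\"; filename=\"")
          .append(dataFileName).append("\"\r\n")
          .append("Content-Type: application/json\r\n\r\n")
          .append(dataJson).append("\r\n");
        sb.append("--").append(boundary).append("\r\n")
          .append("Content-Disposition: form-data; name=\"content\"; filename=\"")
          .append(contentFileName).append("\"\r\n")
          .append("Content-Type: ").append(contentMimeType).append("\r\n\r\n")
          .append(contentBody).append("\r\n");
        sb.append("--").append(boundary).append("--\r\n"); // closing boundary
        return sb.toString();
    }
}
```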

3.2.4. Commit the publication (publish)

Request (/core/publication/publish)
POST /core/publication/publish?emauth=f40ca28e-b5bc-4b2c-9743-3c21a2cd0ee5&pubId=0247-0a42727e4f1a-7fb0caac81f7-0003 HTTP/1.1
Host: localhost:8480
Content-Type: application/json
Cache-Control: no-cache
Response

In case of success the response is an empty JSON, the status code is 200.

3.2.5. Abort the publication

Request (/core/publication/abort)
POST /core/publication/abort?emauth=f40ca28e-b5bc-4b2c-9743-3c21a2cd0ee5&pubId=0247-0a42727e4f1a-7fb0caac81f7-0003 HTTP/1.1
Host: localhost:8480
Content-Type: application/json
Cache-Control: no-cache
Response

In case of success the response is an empty JSON, the status code is 200.

3.2.6. One publication with many correlated files

If you want to make a publication containing several contents, you need to slightly adjust what we did in the previous steps.

Suppose we want to publish an article (art1.xml) with a main image (main.jpg) and also an image inside its body (body.jpg).

In the prepare step you have to declare all three items.

Prepare with multiple files
POST /core/publication/prepare?emauth=d8671b29-1b62-48c1-99fb-27649460aff8 HTTP/1.1
Host: localhost:8480
Content-Type: application/json
Cache-Control: no-cache

{
	"sites": ["test-site"],
	"refs": [{
		"reference": "art1-ref",
		"source": "art1-source"
	}, {
		"reference": "main-img-ref",
		"source": "main-img-source"
	}, {
		"reference": "body-img-ref",
		"source": "body-img-source"
	}],
	"prepareInfo": {
		"publishSource": "my-source",
		"publishDate": "2018-07-27T12:13:14.156Z"
	}
}

As before, the response will contain the publishId.

Then you can upload all the resources, one by one.

Update body image
POST /core/publication/update?emauth=b7562682-e096-4383-8cf9-fd1786c96bf5&pubId=0247-0a42bfe06e30-5cc32edd8977-0003 HTTP/1.1
Host: localhost:8480
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Cache-Control: no-cache

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="data"; filename="body-img.json"
Content-Type: application/json


------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="content"; filename="body.jpg"
Content-Type: image/jpeg


------WebKitFormBoundary7MA4YWxkTrZu0gW--
body-img.json
{
	"foreignId": "body-img-ref",
	"title": "body image",
	"summary": "body image",
	"authors": ["ale.ber"],
	"sys": {
		"baseType": "image",
		"type": "image"
	},
	"attributes": {
		"custom": {}
	},
	"files": {
		"content": {
			"fileName": "body-img.jpg",
			"partName": "content",
			"mimeType": "image/jpeg",
			"data": null
		}
	},
	"pubInfoEx": {
		"test-site": {
			"siteName": "test-site",
			"sectionPath": ""
		}
	}
}
Update main image
POST /core/publication/update?emauth=b7562682-e096-4383-8cf9-fd1786c96bf5&pubId=0247-0a42bfe06e30-5cc32edd8977-0003 HTTP/1.1
Host: localhost:8480
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Cache-Control: no-cache

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="data"; filename="main-img.json"
Content-Type: application/json


------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="content"; filename="main.jpg"
Content-Type: image/jpeg


------WebKitFormBoundary7MA4YWxkTrZu0gW--
main-img.json
{
	"foreignId": "main-img-ref",
	"title": "main image",
	"summary": "main image",
	"authors": ["ale.ber"],
	"sys": {
		"baseType": "image",
		"type": "image"
	},
	"attributes": {
		"custom": {}
	},
	"files": {
		"content": {
			"fileName": "main-img.jpg",
			"partName": "content",
			"mimeType": "image/jpeg",
			"data": null
		}
	},
	"pubInfoEx": {
		"test-site": {
			"siteName": "test-site",
			"sectionPath": ""
		}
	}
}
Update article
POST /core/publication/update?emauth=b7562682-e096-4383-8cf9-fd1786c96bf5&pubId=0247-0a42bfe06e30-5cc32edd8977-0003 HTTP/1.1
Host: localhost:8480
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Cache-Control: no-cache

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="data"; filename="art1.json"
Content-Type: application/json


------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="content"; filename="art1.xml"
Content-Type: text/xml


------WebKitFormBoundary7MA4YWxkTrZu0gW--
art1.json
{
	"foreignId": "art1-ref",
	"title": "my title art1",
	"summary": "my summary",
	"authors": ["ale.ber"],
	"sys": {
		"baseType": "article",
		"type": "article"
	},
	"attributes": {
		"custom": {}
	},
	"links": {
		"hyperlink": {
			"image": [{
				"foreignId": "body-img-ref"
			}]
		},
		"system": {
			"mainPicture": [{
				"foreignId": "main-img-ref"
			}]
		}
	},
	"files": {
		"content": {
			"fileName": "content.xml",
			"partName": "content",
			"mimeType": "text/xml",
			"data": null
		}
	},
	"pubInfoEx": {
		"test-site": {
			"siteName": "test-site",
			"sectionPath": ""
		}
	}
}

In the links node you can define the relations between the article and the images it uses.

art1.xml
<?xml version="1.0" encoding="UTF-8"?>
<document>
	<headgroup>
		<headline>
			<p>My title</p>
		</headline>
	</headgroup>
	<mediagroup>
		<figure>
			<img src="cobaltextref:main-img-ref?format=content" />
		</figure>
	</mediagroup>
	<summary>
		<p>My summary</p>
	</summary>
	<byline>
		<p>ale.ber</p>
	</byline>
	<content>
		<p>
			Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce tempor malesuada elit, dapibus eleifend dolor ultricies non. Donec malesuada nisl lorem, id luctus ipsum hendrerit vitae. Nam efficitur ex sed nibh blandit, eget sagittis mauris euismod. Etiam vitae congue metus, non pharetra sem.
		</p>
		<figure>
			<img src="cobaltextref:body-img-ref?format=content" />
		</figure>
		<p>
			Etiam eros purus, porttitor et placerat quis, auctor in dui. Sed enim erat, lobortis vitae posuere tempor, feugiat laoreet lorem. Etiam a sollicitudin turpis. Maecenas sit amet tincidunt nibh. Nulla facilisi. Nullam diam odio, aliquam at ultrices eu, congue sed quam. Curabitur id pulvinar nisi, id egestas ligula. Mauris luctus feugiat enim eget ultrices.
		</p>
	</content>
</document>

In the XML content, you can refer to the linked images with their external references. Cobalt will convert them into Cobalt ids during the publication.
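Cobalt performs this rewrite server-side during publication. Purely for illustration, the sketch below shows the kind of substitution involved, replacing each external reference with a node id from a lookup map (the class name and the shape of the output URL are invented, not Cobalt's actual format):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExtRefResolver {

    // Matches cobaltextref:REF?format=FORMAT as used in the article XML.
    private static final Pattern EXT_REF =
            Pattern.compile("cobaltextref:([^?\"']+)\\?format=([a-zA-Z0-9_-]+)");

    /** Replaces every external reference with a URL built from the mapped node id. */
    public static String resolve(String xml, Map<String, String> refToId) {
        Matcher m = EXT_REF.matcher(xml);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String id = refToId.getOrDefault(m.group(1), m.group(1));
            // Hypothetical URL shape, for illustration only.
            m.appendReplacement(out, Matcher.quoteReplacement("/resources/" + id + "/" + m.group(2)));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```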

3.2.7. Upload

Create/update a node with all its related files in a single call. It condenses the prepare + update + publish/abort flow we have seen in the previous paragraphs into one request.

Request (/core/publication/upload)
POST /core/publication/upload?emauth=52a3cf82-9b81-46a3-a23a-fe1250234c6a HTTP/1.1
Host: localhost:8480
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Cache-Control: no-cache

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="data"; filename="art0.json"
Content-Type: application/json


------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="content"; filename="art0.xml"
Content-Type: text/xml


------WebKitFormBoundary7MA4YWxkTrZu0gW--
art0.json
{
	"dataType": "node",
	"foreignId": "art0-ref",
	"title": "my title art0",
	"summary": "my summary",
	"authors": ["ale.ber"],
	"sys": {
		"baseType": "article",
		"type": "article"
	},
	"attributes": {
		"custom": {
			"myattr": "myval"
		},
		"tags": ["t1"]
	},
	"files": {
		"content": {
			"fileName": "art0.xml",
			"partName": "content",
			"mimeType": "text/xml",
			"data": null
		}
	},
	"pubInfoEx": {
		"test-site": {
			"siteName": "test-site",
			"sectionPath": ""
		}
	}
}
art0.xml
<?xml version="1.0" encoding="UTF-8"?>
<document>
	<headgroup>
		<headline>
			<p>My title</p>
		</headline>
	</headgroup>
	<summary>
		<p>My summary</p>
	</summary>
	<byline>
		<p>ale.ber</p>
	</byline>
	<content>
		<p>Lorem ipsum</p>
	</content>
</document>

In case of success the response is a JSON containing the uploaded node information (a com.eidosmedia.portal.api.values.NodeData object) and the status code is 200.

3.3. Hooks for publication events

Cobalt has a dedicated service to notify publication events: PSN, the Publication Service Notifier.

The events that are notified are of three types:

  • created content

  • modified content

  • deleted content

Cobalt also comes with two example implementations: one writes the notification information to the logger, the other sends the notification information to a configured endpoint as an HTTP POST call.

These implementations are too trivial to be useful in a real case, but you can use them as a starting point for your own.

PSN can be configured from its conf/cobalt/psn.xml file. In this file you can define all the subscribers you need. For each subscriber you can define:

  • a list of endpoints

  • a comma separated list of the sites you are interested in (or * for all sites)

  • a comma separated list of the types you are interested in (or * for all the types)

For example:

<Subscribers>
    <Subscriber id="test" sites="*" types="article">
        <Endpoint em-type="com.eidosmedia.portal.psn.endpoint.ConsoleEndpoint" />
        <Endpoint url="http://www.example.com/my/service">
            <Header name="X-Powered-By"
                values="Cobalt Publication Service Notifier" />
        </Endpoint>
    </Subscriber>
</Subscribers>

3.3.1. HTTP POST Notifications

To add a new HTTP POST endpoint, you have to add an <Endpoint /> to a <Subscriber />.

For example:

<Subscriber id="test" sites="*" types="article">
    <Endpoint url="http://www.example.com/my/service">
        <Header name="X-Powered-By"
            values="Cobalt Publication Service Notifier" />
    </Endpoint>
</Subscriber>

This will make an HTTP POST call every time an article is published, updated or unpublished (for all the sites handled by Cobalt). You can add custom headers via configuration, as shown in the previous example.

The body of the request will contain the notification data in JSON format. For example, when you publish an article you’ll get a body like the following:

{
  "siteName" : "test-site",
  "created" : [ {
    "publishDate" : "2018-07-27T07:06:46.318Z",
    "parentNodeId" : "4000-0a3603750a8b-042974932c13-2000",
    "nodeId" : "0247-0a41cee84fe3-b3333a23fd9f-1000",
    "title" : "This is a test article",
    "summary" : "This article is used to test PSN",
    "type" : "article",
    "path" : "/",
    "visible" : true,
    "url" : "http://www.site.test:8480/0247-0a41cee84fe3-b3333a23fd9f-1000/index.html"
  } ],
  "updated" : [ ],
  "deleted" : [ ]
}

If you update the same article, you get a notification like the following:

{
  "siteName" : "test-site",
  "created" : [ ],
  "updated" : [ {
    "publishDate" : "2018-07-27T07:08:41.563Z",
    "parentNodeId" : "4000-0a3603750a8b-042974932c13-2000",
    "nodeId" : "0247-0a41cee84fe3-b3333a23fd9f-1000",
    "title" : "This is a test article, updated",
    "summary" : "This article is used to test PSN",
    "type" : "article",
    "path" : "/",
    "visible" : true,
    "url" : "http://www.site.test:8480/0247-0a41cee84fe3-b3333a23fd9f-1000/index.html"
  } ],
  "deleted" : [ ]
}

Finally, if you unpublish this article from Cobalt (you can keep it on the editorial side), you’ll get a notification like the following:

{
  "siteName" : "test-site",
  "created" : [ ],
  "updated" : [ ],
  "deleted" : [ {
    "publishDate" : "2018-07-27T07:10:11.186Z",
    "parentNodeId" : "4000-0a3603750a8b-042974932c13-2000",
    "nodeId" : "0247-0a41cee84fe3-b3333a23fd9f-1000",
    "title" : "This is a test article, updated",
    "summary" : "This article is used to test PSN",
    "type" : "article",
    "path" : "/",
    "visible" : true
  } ]
}

In this example, each of the three arrays (created, updated and deleted) contains at most one item: we are publishing an article together with an image, but the subscriber is declared for the article type only.

If we change the subscriber settings to handle all types, the created array would contain two items (one article and one image). Something like this:

{
  "siteName" : "test-site",
  "created" : [ {
    "publishDate" : "2018-07-27T07:17:50.597Z",
    "parentNodeId" : "4000-0a3603750a8b-042974932c13-2000",
    "nodeId" : "0247-0a41d2ded0dc-e84b446fd375-1000",
    "title" : "This is an image used for PSN test",
    "summary" : "",
    "type" : "image",
    "path" : "/",
    "visible" : true,
    "url" : "http://www.site.test:8480/0247-0a41d2ded0dc-e84b446fd375-1000/index.html"
  }, {
    "publishDate" : "2018-07-27T07:17:50.597Z",
    "parentNodeId" : "4000-0a3603750a8b-042974932c13-2000",
    "nodeId" : "0247-0a41d2ded0df-bc264807a924-1000",
    "title" : "This is a test article with an image",
    "summary" : "This article is used to test PSN",
    "type" : "article",
    "path" : "/",
    "visible" : true,
    "url" : "http://www.site.test:8480/0247-0a41d2ded0df-bc264807a924-1000/index.html"
  } ],
  "updated" : [ ],
  "deleted" : [ ]
}
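To experiment with these notifications locally, you can stand up a throwaway receiver with the JDK's built-in com.sun.net.httpserver package and point a Subscriber endpoint at it. This is a development aid only; the /my/service path matches the endpoint URL used in the configuration example earlier:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class NotificationReceiver {

    /** Starts an HTTP server that prints every PSN notification body it receives. */
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/my/service", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("PSN notification: " + body);
            }
            exchange.sendResponseHeaders(200, -1); // 200 OK, empty response body
            exchange.close();
        });
        server.start();
        return server;
    }
}
```

Passing port 0 lets the OS pick a free port, which you can read back with server.getAddress().getPort().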

3.3.2. Custom Java Notifications

You can create your own custom Java handler to do whatever you want with the notification event data.

To do that, simply create a new Maven project by using the provided archetype (com.eidosmedia.portal:archetype.psn.extensions).

This will create a project with an example class that shows you how to handle the notification event. It simply writes all the data to the configured logger.

package com.example.mypsnext;

import java.text.SimpleDateFormat;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.eidosmedia.portal.psn.ExtendedPublishNodeNotification;
import com.eidosmedia.portal.psn.NotificationData;
import com.eidosmedia.portal.psn.endpoint.Endpoint;

public class MyEndpoint implements Endpoint {
    private static final Logger logger = LoggerFactory.getLogger(MyEndpoint.class);

    private static final SimpleDateFormat SDF = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssz");

    @Override
    public void sendNotificationData(NotificationData notificationData) {
        // created contents
        List<ExtendedPublishNodeNotification> created = notificationData.getCreated();
        if (created != null) {
            for (ExtendedPublishNodeNotification epnn : created) {
                logger.debug("A new {} has been CREATED for site {}!", epnn.getType(), notificationData.getSiteName());
                logger.debug("    id:      {}", epnn.getNodeId());
                logger.debug("    url:     {}", epnn.getUrl());
                logger.debug("    title:   {}", epnn.getTitle());
                logger.debug("    summary: {}", epnn.getSummary());
                logger.debug("    pubDate: {}", SDF.format(epnn.getPublishDate()));
            }
        }
        // updated contents
        List<ExtendedPublishNodeNotification> updated = notificationData.getUpdated();
        if (updated != null) {
            for (ExtendedPublishNodeNotification epnn : updated) {
                logger.debug("An existing {} has been UPDATED for site {}!", epnn.getType(),
                             notificationData.getSiteName());
                logger.debug("    id:      {}", epnn.getNodeId());
                logger.debug("    url:     {}", epnn.getUrl());
                logger.debug("    title:   {}", epnn.getTitle());
                logger.debug("    summary: {}", epnn.getSummary());
                logger.debug("    pubDate: {}", SDF.format(epnn.getPublishDate()));
            }
        }
        // deleted contents
        List<ExtendedPublishNodeNotification> deleted = notificationData.getDeleted();
        if (deleted != null) {
            for (ExtendedPublishNodeNotification epnn : deleted) {
                logger.debug("An existing {} has been DELETED for site {}!", epnn.getType(),
                             notificationData.getSiteName());
                logger.debug("    id:      {}", epnn.getNodeId());
                logger.debug("    title:   {}", epnn.getTitle());
                logger.debug("    summary: {}", epnn.getSummary());
                logger.debug("    pubDate: {}", SDF.format(epnn.getPublishDate()));
            }
        }
    }

}

As you can see, creating a new notification handler means implementing a very simple interface (com.eidosmedia.portal.psn.endpoint.Endpoint) with only one method (sendNotificationData(NotificationData)).

If needed, in this type of extension you can inject the repository service and the discovery service.

Once you have built your jar file, deploy it under the extra/lib/psn folder. This alone is not enough: you also have to add the new Endpoint to the PSN configuration file (conf/cobalt/psn.xml), as shown before. For example:

<Subscriber id="test" sites="*" types="article">
    <Endpoint em-type="com.example.psn.subscribers.custom" />
</Subscriber>

If your custom endpoint needs to be configured from the XML file, you can implement the com.eidosmedia.portal.Configurable interface, which has one single method (see, for instance, com.eidosmedia.portal.psn.endpoint.HttpEndpoint.loadConfiguration(HierarchicalConfiguration)). For example:

public class MyCustom implements Endpoint, Configurable {

    private String something;

    @Override
    public void loadConfiguration(HierarchicalConfiguration configuration)
        throws ConfigurationException {
        this.something = configuration.getString("[@something]");
    }

    // ...
}
Using the SDK in Custom Java Notifications

You can use the Java client SDK inside PSN endpoints to fetch data from Cobalt services.

The following example shows how to retrieve the URLs of the main pictures of deleted articles:

public class MyEndpoint implements Endpoint {
    private static final Logger logger = LoggerFactory.getLogger(MyEndpoint.class);

    private SiteService siteService;

    @Inject
    public void setSiteService(SDK sdk) {
        this.siteService = sdk.getSDKService(SiteService.class);
    }

    @Override
    public void sendNotificationData(NotificationData notificationData) {
        List<ExtendedPublishNodeNotification> deleted = notificationData.getDeleted();
        if (deleted != null) {
            for (ExtendedPublishNodeNotification epnn : deleted) {
                //NodeData is available only for deleted nodes
                CobaltId picture = epnn.getNodeData().getPicture();
                String format = null;
                if (picture instanceof FileId) {
                    format = ((FileId) picture).getFormat();
                }
                if (picture instanceof NodeId) {
                    UrlHolder pictureUrl = siteService.getUrlsManager()
                            .evalResourceUrl(new SiteKey(notificationData.getSiteName(), ViewStatus.LIVE),
                                             (NodeId) picture, format, UrlIntent.HOST_RELATIVE);
                    logger.debug("Main picture url: {}", pictureUrl.getUrl());
                }
            }
        }
    }

}

You will need to add the sdk.web-base dependency with provided scope to the dependencies section of your project pom.xml:

<dependencies>
	...
	<dependency>
		<groupId>com.eidosmedia.portal</groupId>
		<artifactId>sdk.web-base</artifactId>
		<scope>provided</scope>
	</dependency>
</dependencies>

4. Themes

Cobalt is released with a single theme, called default, which serves only as a reference base for the development of custom themes and as a fallback in case a particular resource is not defined within the custom theme. It is not directly visible because it is contained within the Site module. In the Cobalt distribution package we release a copy of the default theme, called sample (inside the src/themes folder). We also release a zip file containing an empty theme that can be used as a starting point for new theme development (the file is located at src/themes/blank_theme.zip).

For this simple walkthrough, we’re going to use a freely available Bootstrap template, downloadable from Start Bootstrap - Clean Blog. It’s a very basic theme (essentially oriented toward blog sites), but it is sufficient to show how to create a new Cobalt theme.

For simplicity, we assume the theme is created on a developer PC, using Eclipse as the IDE with the Cobalt SDK installed. You can of course use a different IDE, but in that case you have to manually copy your files into the correct folder of the Cobalt instance (your-cobalt-base-folder/src/themes/your-new-theme).

We also assume you have already created a site named mysite which responds at http://www.mysite.test:8480.

4.1. Theme folder preparation

As mentioned before, we can bootstrap our new theme by using the blank_theme.zip file provided within the Cobalt release package. You can find this zip file in the Servers > Cobalt Server 1.0 at localhost-config > src > themes folder.

theme-server-folder-structure
Figure 31. Cobalt Server folder structure in Eclipse with Cobalt SDK

Unzip this file directly within the themes folder and rename the resulting folder mytheme.

theme-server-folder-structure
Figure 32. mytheme folder inside the themes folder

Download the Start Bootstrap - Clean Blog template and extract its content inside the mytheme/assets folder.

theme-template-folder-structure
Figure 33. Start Bootstrap - Clean Blog template extracted inside the theme assets folder

Since you are developing a new theme, you don’t want to modify the real settings of your site. However, you can tell your local instance of the Site module to use the new theme when serving the pages of the requested site. To do that, place a file called site.properties inside your src/www/mysite folder (remember that mysite is the name of an existing site). In this file, set the theme property to mytheme (the name of the custom theme you just created).

theme=mytheme

Please note that this theme overriding process is available only in development mode, that is, when the property development.mode=true is set in your cobalt.properties file.

If at this point you load the site in your browser, you’ll see a very bad result (the default theme mixed with the CSS files from the new theme’s assets). Don’t worry, it’s all under control.

4.2. Section page

We consider a section page to be a simple list of articles in chronological order. Later we’ll see how to model the contents of a section page not as a flat list of articles, but as a sequence of lists of objects.

In the section’s model you have an array called children containing a sorted list of ids.

These ids can be used to retrieve the related node info directly from the nodes map within the section model.

The first thing to do is to identify the resource inside your assets folder that is going to represent this kind of page: in this case, the assets/index.html file.

Copy this file into the templates folder and call it section.ftl.

If at this point you open your site root page, namely http://www.mysite.test:8480/, you’ll see the original index.html page rendered.

theme-clean-blog
Figure 34. The static home page served

Please note that the root of a site doesn’t have the section type; it has the site type. In this case, however, the site root is rendered using the section.ftl file (and not site.ftl) simply because the default theme’s site.ftl template contains this:

<@include template="section.ftl"/>

To change the title and the description of the page, you can substitute this HTML code in the section.ftl template:

    <!-- Page Header -->
    <header class="masthead" style="background-image: url('img/home-bg.jpg')">
      <div class="overlay"></div>
      <div class="container">
        <div class="row">
          <div class="col-lg-8 col-md-10 mx-auto">
            <div class="site-heading">
              <h1>Clean Blog</h1>
              <span class="subheading">A Blog Theme by Start Bootstrap</span>
            </div>
          </div>
        </div>
      </div>
    </header>

with this:

    <!-- Page Header -->
    <header class="masthead" style="background-image: url('img/home-bg.jpg')">
      <div class="overlay"></div>
      <div class="container">
        <div class="row">
          <div class="col-lg-8 col-md-10 mx-auto">
            <div class="site-heading">
              <h1>${currentObject.title}</h1>
              <span class="subheading">${currentObject.description}</span>
            </div>
          </div>
        </div>
      </div>
    </header>

We simply replace the title and description text with the title and description values of currentObject, provided by the page model.

The HTML block that represents a single article is this one:

          <div class="post-preview">
            <a href="post.html">
              <h2 class="post-title">
                Man must explore, and this is exploration at its greatest
              </h2>
              <h3 class="post-subtitle">
                Problems look mighty small from 150 miles up
              </h3>
            </a>
            <p class="post-meta">Posted by
              <a href="#">Start Bootstrap</a>
              on September 24, 2018</p>
          </div>
          <hr>

We can keep just one of these blocks and remove all the others. Then we can iterate over all the children ids present in the model and populate a block for each of them.

<#list model.children as childId>
	<#assign child=model.nodes[childId]>
          <div class="post-preview">
            <a href="${url(child)}">
              <h2 class="post-title">
                ${child.title!}
              </h2>
              <h3 class="post-subtitle">
                ${child.summary!}
              </h3>
            </a>
            <p class="post-meta">Posted by
              <a href="#">${child.authors[0]}</a>
              on ${child.pubInfo.publicationTime?date}</p>
          </div>
          <hr>
</#list>

The most interesting things here are the way we navigate the nodes list (through the <#list> Freemarker directive) and the way we compute the article URL (by using a Freemarker method exposed by Cobalt, ${url(child)}).

theme-section
Figure 35. The home page with dynamic content rendered

4.3. Article page

The article page in our base template is represented by the assets/post.html file. As we did for the section page, copy this post.html file into the templates folder and rename it article.ftl.

Once that’s done, we can proceed to put Freemarker directives within the HTML code.

For the page header we can write the following code:

    <!-- Page Header -->
    <#assign bgimg=resourceUrl(currentObject.picture).default(theme.assetUrl('img/post-bg.jpg'))>
    <header class="masthead"
    	style="background-image: url('${bgimg}')">
      <div class="overlay"></div>
      <div class="container">
        <div class="row">
          <div class="col-lg-8 col-md-10 mx-auto">
            <div class="post-heading">
              <h1>${currentObject.title!}</h1>
              <h2 class="subheading">${currentObject.summary!}</h2>
              <span class="meta">Posted by
                <a href="#">${currentObject.authors[0]!}</a>
                on ${currentObject.pubInfo.publicationTime?date}</span>
            </div>
          </div>
        </div>
      </div>
    </header>

Note that if you try to open the article page at this point, it will render badly, because the URLs of all the static resources (CSS and JS references) are resolved incorrectly. This happens because the references within the HTML are relative and the article URL introduces additional path segments. To fix this, modify all the resource imports by placing a / in front of all the href/src attributes. For example, this line:

    <script src="vendor/jquery/jquery.min.js"></script>

has to be fixed in this way:

    <script src="/vendor/jquery/jquery.min.js"></script>

Now it’s time to render the content.

The content is contained in the currentObject.files.content.data property of your underlying model. Since we are rendering an article whose content is in HTML format, we can output this field directly.

    <!-- Post Content -->
    <article>
      <div class="container">
        <div class="row">
          <div class="col-lg-8 col-md-10 mx-auto">
            ${currentObject.files.content.data}
          </div>
        </div>
      </div>
    </article>

However, you will probably need to transform this content, for example to clean up some useless blocks or to turn standard blocks into HTML code that conforms to your theme. To do this you can use the transform directive, which applies an XSL transformation to the selected node.

As stated in the documentation, the article’s content is also available through the currentObject.xmlContent property, which exposes the parsed XML content of the article. By using this, you can easily select a node inside your content and pass it to the render function.

Our current article content is this one:

<document emk-type="story">
  <headgroup emk-id="RKX8cctJFwCjZ2e8g48N17O">
    <headline emk-id="">
      <p>My first article</p>
    </headline>
  </headgroup>
  <mediagroup>
    <figure emk-channel="Globe-Web,Cobalt-Blog1,AbTest,Globe-Print,Globe-Tablet,Globe-Mobile" emk-type="photo-normal">
      <img class="normal xsm-blockstyle-align-center" data-id="0247-0a203ee1f5e6-962a33938ae7-1000" emk-id="U85815164040xiY" height="450" src="/resources/0247-0a203ee1f5e6-962a33938ae7-1000/....jpeg" width="800">
    </figure>
  </mediagroup>
  <linkgroup></linkgroup>
  <summary emk-id="RsysEoqUKF4gqRm9KCXzDrJ">
    <p>This is my first article</p>
  </summary>
  <content emk-id="RL1YZJyW22zn2bQD26U4W3N">
    <byline>Ale Ber</byline>
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
  </content>
  <style></style>
</document>

Since we only need to output the document.content of the article, we can pass just this node to the transform directive.

    <!-- Post Content -->
    <article>
      <div class="container">
        <div class="row">
          <div class="col-lg-8 col-md-10 mx-auto">
            <@transform content=currentObject.xmlContent.document.content />
          </div>
        </div>
      </div>
    </article>

If you do not specify an XSL transformation file in the transform directive (through the xsl attribute), the src/themes/mytheme/default.xsl file will be used. If this file does not exist, the default theme’s default.xsl file will be used.

Since we don’t want to output the byline content and we don’t need a content block around the output, we can create a simple XSL as follows.

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xalan="http://xml.apache.org/xalan"
    xmlns:cobalt="xalan://com.eidosmedia.portal.impl.xsl.CobaltElement"
    extension-element-prefixes="cobalt" exclude-result-prefixes="xalan">

    <xsl:output method="html" indent="yes"
        omit-xml-declaration="yes" xalan:indent-amount="4" />

    <xsl:template match="content">
        <xsl:apply-templates select="p" />
    </xsl:template>

</xsl:stylesheet>

With this in place, the transform directive simply outputs this HTML fragment:

<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
Figure 36. The rendered article page

4.4. Webpage page

A webpage can be seen as a container. This container is organized in different zones. A zone is itself a container and can be organized into different sequences. A sequence is another container that can contain different elements.

webpage WP1
|- zone Z1
|  |- sequence S1
|  |  |- content C1
|  |  |- content C2
|  |  |- content CX
|  |- sequence S2
|  |  |- content C3
|  |  |- content CY
|- zone Z2
|  |- sequence S3
|  |  |- content C4

This structure can be reconstructed from the model of the page, mainly by using the defined links.

However, Cobalt provides Freemarker extensions that help you navigate the webpage hierarchy.

<ul>
  <#list zones()?keys as zoneName>
    <#assign azone=zone(zoneName)>
    <li>
      zone ${zoneName}
      <ul>
        <#list azone.sequences as sequence>
          <li>
            sequence ${sequence?counter}
            <ul>
              <#list sequence.items as item>
                <li>${item.object.title}</li>
              </#list>
            </ul>
          </li>
        </#list>
      </ul>
    </li>
  </#list>
</ul>

You can get all the zones using the zones() method. This returns a map of the zones, where the key is the zone name. To navigate this map you can simply iterate its key set (using zones()?keys). Once you have the zone name, you can fetch the zone using the zone method.

You can list all the zone sequences using a simple list directive. Finally you can iterate the sequence items.

Usually you’ll have zones with a single sequence. For this reason we give you the possibility to list items directly from their zone, without the need to iterate sequences.

<ul>
  <#list zones()?keys as zoneName>
    <#assign azone=zone(zoneName)>
    <li>
      zone ${zoneName}
      <ul>
        <#list azone.items as item>
          <li>${item.object.title}</li>
        </#list>
      </ul>
    </li>
  </#list>
</ul>
Remember not to write <#assign zone=zone(zoneName)>, because zone is also the method name; in the code snippet we used azone instead.

Moreover, you don’t want to put complex item-rendering logic directly within the section template (in the previous example we simply output the item title).

To avoid this you can use the include directive.

<ul>
  <#list zones()?keys as zoneName>
    <#assign azone=zone(zoneName)>
    <li>
      zone ${zoneName}
      <ul>
        <#list azone.items as item>
          <li>
            <@include object=item />
          </li>
        </#list>
      </ul>
    </li>
  </#list>
</ul>

Suppose you are including an article: the template engine will try to use the src/mytheme/fragments/article.ftl fragment template. You can create this file, and inside it treat the included item as the new currentObject.

<a href="${url(currentObject)}">${currentObject.title}</a>
Figure 37. The final theme folder

5. Site Service Extensions

Cobalt has several extension points for the Site Service. In the following sections, we provide extensive step-by-step examples for each.

5.1. Injectable Components

In the extension points, many beans and managers of the Site Service can be injected via the standard JSR-330 annotations.

5.1.1. PortalRequest

It represents the request during the processing flow in the Site Service.

The class qualified name, including package, is:

com.eidosmedia.portal.dao.PortalRequest

It is filled with information during request processing, such as the logged-in user (accessible via the getUserData method) and the identifiers of the page being built (getContentDescriptor method).

5.1.2. UrlResolver

It allows you to:

  • evaluate dynamic page or static resource file (e.g. image) URLs from references to objects in the Cobalt repository;

  • resolve object references from dynamic page or static resource file URLs

The class qualified name, including package, is:

com.eidosmedia.portal.url.UrlResolver

It contains several eval methods, depending on the parameters you use to evaluate the URL, such as:

  • the desired view,

  • the desired page,

  • the UrlIntent, which can be HOST_RELATIVE or ABSOLUTE (including the hostname and protocol parts)

  • the ResolutionType, which can be CONTENT to generate the URL of a dynamic page, or RESOURCE to generate the URL of a static resource

The eval methods require as input either the NodeId or the NodeData objects referencing an object in the Cobalt repository.

The resolveResourceUrl and resolveUrl methods resolve, respectively, the FileId of a specific static resource or an UrlResolverMatch. The latter class references either a page, via its ContentDescriptor identifier, or redirection coordinates (a URL to redirect to and a status code).

public class UrlResolverMatch {

    private final ContentDescriptor contentDescriptor;

    private final int statusCode;

    private final URI urlTo;

    public ContentDescriptor getContentDescriptor() {
        return contentDescriptor;
    }

    public int getStatusCode() {
        return statusCode;
    }

    public URI getUrlTo() {
        return urlTo;
    }

    public UrlResolverMatch(ContentDescriptor contentDescriptor) {
        this.contentDescriptor = contentDescriptor;
        this.statusCode = 0;
        this.urlTo = null;
    }

    public UrlResolverMatch(int statusCode, URI urlTo) {
        this.contentDescriptor = null;
        this.statusCode = statusCode;
        this.urlTo = urlTo;
    }

}
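The two constructors encode two mutually exclusive outcomes: a match on a page, or a redirect. The following self-contained sketch illustrates how the two states are used; the ContentDescriptor here is a simplified stand-in for the real Cobalt class, reduced to a bare id, and isRedirect is an illustrative helper, not part of the actual API.

```java
import java.net.URI;

public class UrlResolverMatchDemo {

    // Simplified stand-in for the real ContentDescriptor class.
    static class ContentDescriptor {
        final String id;
        ContentDescriptor(String id) { this.id = id; }
    }

    // Same shape as the UrlResolverMatch shown above: either a page match
    // (content descriptor set, status code 0) or a redirect (status + URI).
    static class UrlResolverMatch {
        final ContentDescriptor contentDescriptor;
        final int statusCode;
        final URI urlTo;

        UrlResolverMatch(ContentDescriptor contentDescriptor) {
            this.contentDescriptor = contentDescriptor;
            this.statusCode = 0;
            this.urlTo = null;
        }

        UrlResolverMatch(int statusCode, URI urlTo) {
            this.contentDescriptor = null;
            this.statusCode = statusCode;
            this.urlTo = urlTo;
        }

        // Illustrative helper: the redirect constructor always sets urlTo.
        boolean isRedirect() {
            return urlTo != null;
        }
    }

    public static void main(String[] args) {
        UrlResolverMatch page = new UrlResolverMatch(new ContentDescriptor("a-node-id"));
        UrlResolverMatch redirect = new UrlResolverMatch(301, URI.create("/new-location"));
        System.out.println(page.isRedirect());     // false: resolves to a page
        System.out.println(redirect.isRedirect()); // true: 301 to /new-location
    }
}
```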
UrlResolverHook

You can implement com.eidosmedia.portal.url.UrlResolverHook interface to hook right after a NodeData has been resolved by the UrlResolver.

To configure the hook you can add the implementation you want to use to the cobalt properties file:

site.url.hook=com.eidosmedia.portal.url.MyUrlResolverHook

Any additional property you define can be used to configure your hook class:

site.url.hook.myKey=myValue

You can then use the @Named annotation to refer to it at the injection point. For example:

public class MyUrlResolverHook implements UrlResolverHook {

    private final String myKey;

    @Inject
    public MyUrlResolverHook(@Named("site.url.hook.myKey") String myKey) {
        this.myKey = myKey;
    }

    //execute method implementation follows
}

You can use this kind of hook to redirect to another path returning a UrlResolverMatch with a redirect status code:

@Override
public UrlResolverMatch execute(SiteKey siteKey,
                                UrlResolverMatch match,
                                NodeData nodeData,
                                PortalRequest portalRequest) {
    //some other code here
    //...
    return new UrlResolverMatch(301, URI.create("/"));
}

or redirect to another node returning a UrlResolverMatch with a ContentDescriptor:

@Override
public UrlResolverMatch execute(SiteKey siteKey,
                                UrlResolverMatch match,
                                NodeData nodeData,
                                PortalRequest portalRequest) {
    //some other code here
    //...
    return new UrlResolverMatch(new ContentDescriptor(aNodeId));
}

You can also use the hook to add HTTP response headers through the PortalRequest#addResponseHeader(String, String...) method:

@Override
public UrlResolverMatch execute(SiteKey siteKey,
                                UrlResolverMatch match,
                                NodeData nodeData,
                                PortalRequest portalRequest) {
    //some other code here
    //...
    portalRequest.addResponseHeader("myHeader", "myValue");
    //returning the same match
    return match;
}

You can even throw a DataUnavailableException implementation with an error status code to send in the response:

@Override
public UrlResolverMatch execute(SiteKey siteKey,
                                UrlResolverMatch match,
                                NodeData nodeData,
                                PortalRequest portalRequest) {
    //some other code here
    //...
    throw new DataNotFoundException();
}

5.1.3. DataManager

It is the main access point for direct access to objects in the Cobalt repository and for full-text queries.

The class qualified name, including package, is:

com.eidosmedia.portal.dao.DataManager
Retrieving nodes by Id
NodeData getNodeData(SiteKey siteKey, NodeId nodeId, String[] aggregators, PortalRequest portalRequest) throws DataException;
DataResult<NodeData> getNodesData(SiteKey siteKey, List<NodeId> nodeIds, String[] aggregators, PortalRequest portalRequest) throws DataException;

where the aggregators parameter can be used to provide an array of string constants among the ones available in the class:

com.eidosmedia.portal.dao.Aggregators
Querying nodes in a specific section
PaginatedResult<NodeId> getNodeChildren(SiteKey siteKey, NodeData nodeData, Params params, PortalRequest portalRequest) throws DataException;
PaginatedResult<NodeId> getNodeChildren(SiteKey siteKey, String sectionPath, Params params, PortalRequest portalRequest) throws DataException;

where nodeData (or sectionPath) identifies the section whose child nodes we want to fetch, and params is an object of type com.eidosmedia.portal.dao.Params that allows you to define filtering options.

Querying by foreign source id

You can convert a reference from the foreign system (the system you published the content from, e.g. the Methode editorial system) into a reference in Cobalt.

NodeId getNodeIdByForeignId(SiteKey siteKey, String foreignId, PortalRequest portalRequest) throws DataException

5.1.4. ContentDataManager

It allows you to retrieve an aggregation of nodes. This aggregation is part of the model used by the template engine and is cached persistently.

The class qualified name, including package, is:

com.eidosmedia.portal.content.ContentDataManager

The main method is:

ContentData buildContentData(SiteKey siteKey, AggregationKey aggregationKey, PortalRequest portalRequest) throws Exception;

The aggregation key is the identifier of an aggregated object. Its id field is the EntityId (usually its subclass NodeId for database NodeData objects) of the main data node, the one driving the aggregation:

public AggregationKey(EntityId id,
                        OutputMode outputMode,
                        String[] aggregators,
                        int pageNumber,
                        String permissionVariant)

5.1.5. SitesManager

It retrieves information about sites, such as site structure, sections, permalinks.

The class qualified name, including package, is:

com.eidosmedia.portal.site.SitesManager

Many of its methods return com.eidosmedia.portal.site.SiteNode objects.

It contains utilities to retrieve:

  • the root of a website given the site identifier

  • a site map given the site identifier

  • the hostname of a website given the site identifier

  • the permalink template set on a website

5.2. SiteRequestHandler

The SiteRequestHandler interface allows you to define virtual pages responding to specific paths of your websites.

Virtual pages can fetch data from external sources and are cached with the standard Cobalt caching mechanisms.

public interface SiteRequestHandler {

    String getId();

    default String getDescription() {
        return null;
    }

    boolean match(AggregationKey aggregationKey);

    UrlResolverMatch resolveUrl(SiteKey siteKey,
                                String relativePath,
                                Map<String, String[]> queryParams,
                                PortalRequest portalRequest) throws DataException;

    default List<String> evalUrls(SiteKey siteKey,
                                  CobaltId id,
                                  PortalRequest portalRequest) {
        return null;
    }

    AbstractData evalRootEntity(SiteKey siteKey,
                                AggregationKey aggregationKey,
                                PortalRequest portalRequest) throws DataException;

    ContentData enrichContentData(SiteKey siteKey,
                                  ContentData contentData,
                                  AggregationKey aggregationKey,
                                  PortalRequest portalRequest) throws DataException;

	default boolean isSiteConfigurable() {
        return false;
    }

    default boolean canBeDisabled() {
        return false;
    }

    default Set<SiteRequestHandlerParameter> getParameters(SiteKey siteKey) {
        return Collections.emptySet();
    }

}

The methods "getDescription", "isSiteConfigurable", "canBeDisabled", and "getParameters" can be overridden to provide descriptive information and configuration options for the SiteRequestHandler.

5.2.1. Example: Videos Fragment

The Videos fragment is an example of a SiteRequestHandler that fetches a video playlist from an external provider and produces a ContentData with a custom data model.

resolveUrl method

The resolveUrl method is used to check if a website relative path matches this handler:

package com.eidosmedia.cobalt.extensions;

// imports here

public static String PREFIX = "/_ssi/fragments/videos";

@Override
public UrlResolverMatch resolveUrl(SiteKey siteKey, String relativePath, Map<String, String[]> params, PortalRequest portalRequest) throws DataException {
    // We define the rules for the URI to match our content
    if (relativePath.startsWith(PREFIX)) {
        int page = 0;
        if (relativePath.startsWith(PREFIX + "/")) {
            String suffix = relativePath.substring(PREFIX.length() + 1);
            try {
                page = Integer.parseInt(suffix);
            } catch (NumberFormatException ex) {
                LOGGER.error("Not a number suffix: {}", suffix);
                return null;
            }
        }

        // We create a new id for our data "videos"
        SiteRequestId id = new SiteRequestId("videos");

        // We create a new ContentDescriptor with outputMode FRAGMENT
        ContentDescriptor contentDescriptor = new ContentDescriptor(id,
                                                                    OutputMode.FRAGMENT,
                                                                    null,
                                                                    page,
                                                                    null);

        // We return the UrlResolverMatch with embedded content descriptor
        UrlResolverMatch m = new UrlResolverMatch(contentDescriptor);
        return m;
    }
    return null;
}

The method returns a valid UrlResolverMatch if the request URL starts with the PREFIX constant; otherwise it returns null.
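The path-matching and page-number extraction can be exercised in isolation. The following self-contained sketch mirrors the logic of the handler above; matchPage and its -1 "no match" convention are illustrative, not part of the Cobalt API.

```java
public class PathMatchDemo {

    static final String PREFIX = "/_ssi/fragments/videos";

    // Returns the page number encoded in the path suffix, 0 when absent,
    // or -1 when the path does not match this handler at all.
    static int matchPage(String relativePath) {
        if (!relativePath.startsWith(PREFIX)) {
            return -1;
        }
        if (relativePath.startsWith(PREFIX + "/")) {
            String suffix = relativePath.substring(PREFIX.length() + 1);
            try {
                return Integer.parseInt(suffix);
            } catch (NumberFormatException ex) {
                return -1;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(matchPage("/_ssi/fragments/videos"));   // 0
        System.out.println(matchPage("/_ssi/fragments/videos/3")); // 3
        System.out.println(matchPage("/other/path"));              // -1
    }
}
```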

The UrlResolverMatch is the same bean used by Cobalt’s UrlResolver. It must carry either a valid ContentDescriptor or a redirect URL and status code.

The ContentDescriptor must refer to a virtual id of type SiteRequestId. Developers must ensure that the ids used are unique.

public ContentDescriptor(EntityId id, OutputMode outputMode, String view, int pageNumber, String permissionVariant)

The VideosFragment produces a ContentDescriptor with OutputMode.FRAGMENT. This virtual content is meant to be rendered as a fragment included in a page. The pageNumber is used to split the content into separate parts (each identified by its own ContentDescriptor), each containing only a subset of the videos.
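The slicing implied by pageNumber can be sketched generically: given a full list and a zero-based page number, each descriptor addresses one slice. The page helper and the page size below are illustrative assumptions, not Cobalt defaults.

```java
import java.util.List;

public class PagingDemo {

    // Return the slice of items belonging to the given zero-based page.
    // An out-of-range page yields an empty list.
    static <T> List<T> page(List<T> items, int pageNumber, int pageSize) {
        int from = Math.min(pageNumber * pageSize, items.size());
        int to = Math.min(from + pageSize, items.size());
        return items.subList(from, to);
    }

    public static void main(String[] args) {
        List<String> videos = List.of("v1", "v2", "v3", "v4", "v5");
        System.out.println(page(videos, 0, 2)); // [v1, v2]
        System.out.println(page(videos, 2, 2)); // [v5]
        System.out.println(page(videos, 3, 2)); // []
    }
}
```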

match method

Once the handler matches a request, its processing is delegated to a different component that uses the AggregationKey as identifier (it contains a superset of the fields of the ContentDescriptor). For this reason a subsequent check is performed through the match method:

@Override
public boolean match(AggregationKey aggregationKey) {
    return aggregationKey.getId().equals(new SiteRequestId("videos"));
}
The implementation of the match interface method is always similar to the one above, a simple check on the id field.
evalRootEntity method

The evalRootEntity method builds the root data object that must extend the AbstractData class. Usually the CustomData concrete class is used:

@Override
public AbstractData evalRootEntity(SiteKey siteKey, AggregationKey aggregationKey, PortalRequest portalRequest) throws DataException {
    // We create a new data object with the SiteRequestId created before as ID
    CustomData d = new CustomData(aggregationKey.getId(), new Type("videos"));

    // We set 60 seconds of time to live. This way the ContentData object built on top of
    // this CustomData will last for 60 seconds in the cache.
    d.setCacheTTLSeconds(60);

    // We retrieve our videos from dailymotion and put them in the data object
    try {
        Videos videos = client.target("https://api.dailymotion.com/playlist")
                .path("x4dmd3")
                .path("videos")
                .queryParam("fields", "id,title,thumbnail_240_url,embed_url,embed_html")
                .queryParam("page", aggregationKey.getPageNumber() + 1).request().get(Videos.class);

        d.put("videos", videos);
    } catch (Exception ex) {
        logger.error("evalRootEntity - error contacting the external service", ex);
		throw new DataException("Unable to build the content", ex);
    }

    return d;
}

A new Type is provided as a parameter to the CustomData constructor. This type drives which template is fetched to dress the content.

Furthermore, CustomData implements the two interfaces CacheConfigurable and PortalResource:

public interface CacheConfigurable {

    Long getCacheTTLSeconds();
}

public interface PortalResource  {

    CobaltId getId();

    long getLastModified();

    String getEtag();

    long getContentLength();

    String getContentType();

    String getCharacterEncoding();

    Locale getLocale();

    String getDescription();

}

The getCacheTTLSeconds method determines the amount of time, in seconds, that the ContentData built on top of the root object (in our case, the CustomData) stays in the cache.

The getEtag and getLastModified methods build, respectively, the ETag and Last-Modified headers for the browser, when the Client Side Cache is enabled in the Site Service. This way the server can avoid resending data to the client when it is not necessary.

In CustomData, the getEtag and getLastModified method implementations return, respectively, a version string and a timestamp that can be set with the constructor:

public CustomData(CobaltId id, Type type, String version, Date timestamp)
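The conditional-request check that these headers enable can be sketched as follows. This is a generic illustration of HTTP ETag handling, not Cobalt's actual implementation; notModified is a hypothetical helper.

```java
public class ConditionalRequestDemo {

    // A 304 Not Modified can be sent when the client's If-None-Match header
    // matches the current ETag of the resource.
    static boolean notModified(String ifNoneMatch, String currentEtag) {
        return ifNoneMatch != null && ifNoneMatch.equals(currentEtag);
    }

    public static void main(String[] args) {
        String etag = "fbf017eb/1530802111901";
        System.out.println(notModified(etag, etag));      // true  -> respond 304, no body
        System.out.println(notModified("old/123", etag)); // false -> respond 200 with body
        System.out.println(notModified(null, etag));      // false -> first request, no header
    }
}
```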

The CustomData class implements the Map interface and is backed by an internal map field. The data stored in this map is visible in the JSON API responses as well as in the model for the template engine.

We put the result of an API call to an external video provider in the videos field of the map (see Jersey Client for details on how to inject the client to perform web service requests). The following Videos and Video POJOs are used to map the response:

public static final class Videos implements Serializable {
    private static final long serialVersionUID = 1L;

    private List<Video> list;
    private Boolean has_more;
    private Integer page;

    public Boolean getHas_more() {
        return has_more;
    }

    public List<Video> getList() {
        return list;
    }

    public Integer getPage() {
        return page;
    }
}

public static final class Video implements Serializable {
	private static final long serialVersionUID = 1L;

    private String id;
    private String title;
    private String thumbnail_240_url;
    private String embed_url;
    private String embed_html;

    public String getId() {
        return id;
    }

    public String getTitle() {
        return title;
    }

    public String getThumbnail_240_url() {
        return thumbnail_240_url;
    }

    public String getEmbed_url() {
        return embed_url;
    }

    public String getEmbed_html() {
        return embed_html;
    }
}
All data must subclass AbstractData and therefore it is mandatory to implement Serializable. If using CustomData, any object put in the map must itself implement Serializable.
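A quick way to verify that a POJO graph is fully serializable is a round trip through Java serialization. This is a hypothetical check, not part of the Cobalt API; Item mimics the pattern of the Video class above.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCheckDemo {

    // A minimal POJO following the same pattern as the Video class above.
    static class Item implements Serializable {
        private static final long serialVersionUID = 1L;
        private final String title;
        Item(String title) { this.title = title; }
        String getTitle() { return title; }
    }

    // Serialize the object to bytes and read it back; fails fast if any
    // object in the graph is not Serializable.
    static <T extends Serializable> T roundTrip(T obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            try (ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                @SuppressWarnings("unchecked")
                T copy = (T) ois.readObject();
                return copy;
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("object graph is not serializable", e);
        }
    }

    public static void main(String[] args) {
        Item copy = roundTrip(new Item("My first video"));
        System.out.println(copy.getTitle()); // My first video
    }
}
```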

Be sure to throw a DataException if the evalRootEntity code cannot build a valid AbstractData object. Otherwise, the data object you return could be put in the cache even if it is not valid.

You could also extend its abstract subclass DataUnavailableException (or use one of its already existing subclasses, e.g. DataNotFoundException) to set the specific HTTP status code for the error.

public abstract class DataUnavailableException extends DataException {
    private static final long serialVersionUID = 1L;

    private final int statusCode;

    public DataUnavailableException(int statusCode) {
        super();
        this.statusCode = statusCode;
    }

    public DataUnavailableException(String message, int statusCode, Throwable cause) {
        super(message, cause);
        this.statusCode = statusCode;
    }

    public DataUnavailableException(String message, int statusCode) {
        super(message);
        this.statusCode =  statusCode;
    }

    public DataUnavailableException(Throwable cause, int statusCode) {
        super(cause);
        this.statusCode = statusCode;
    }

    public int getStatusCode() {
        return statusCode;
    }
}
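As an illustration, a custom subclass mapping a permanently removed node to HTTP 410 Gone could look like the following sketch. DataGoneException is hypothetical, and the exception classes here are reduced stand-ins for the real Cobalt hierarchy.

```java
public class ErrorStatusDemo {

    // Reduced stand-in for the real com.eidosmedia DataException.
    static class DataException extends Exception {
        DataException(String message) { super(message); }
    }

    // Same shape as the DataUnavailableException shown above, reduced
    // to a single constructor for brevity.
    static abstract class DataUnavailableException extends DataException {
        private final int statusCode;
        DataUnavailableException(String message, int statusCode) {
            super(message);
            this.statusCode = statusCode;
        }
        int getStatusCode() { return statusCode; }
    }

    // Hypothetical subclass: report a permanently removed node as HTTP 410.
    static class DataGoneException extends DataUnavailableException {
        DataGoneException(String message) { super(message, 410); }
    }

    public static void main(String[] args) {
        DataUnavailableException ex = new DataGoneException("node was unpublished");
        System.out.println(ex.getStatusCode()); // 410
    }
}
```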
enrichContentData method

The enrichContentData method is used to aggregate additional NodeData or AbstractData instances in the nodes map. In this example we don’t need it:

@Override
public ContentData enrichContentData(SiteKey siteKey, ContentData contentData, AggregationKey aggregationKey, PortalRequest portalRequest)
        throws DataException {
    return contentData;
}
Remember to return the contentData, even if not modified.
If NodeId references are put in fields unknown to the system, and some of the referenced NodeData objects are not visible due to embargo, those objects will be filtered out, but their references will remain dangling in the ContentData. You have to handle this yourself in your FTL templates.
Registering the SiteRequestHandler

With the SiteRequestHandler implemented, we need to register it in the cobalt.properties configuration file:

site.requestHandler[0].em-type=com.eidosmedia.cobalt.extensions.VideosFragment

You need to restart the Cobalt server for the change to take effect.

You can also parameterize your SiteRequestHandler configuration. For example, suppose you want to add a dynamic path parameter instead of using the PREFIX constant:

site.requestHandler[0].em-type=com.eidosmedia.cobalt.extensions.VideosFragment
site.requestHandler[0].path=/my-path

In order to read this parameter from the Java code, you need to create a setter method, and a corresponding field, in your class:

private String path;

public void setPath(String path) {
    this.path = path;
}

Be sure that the name after the set prefix of the setter method matches the property name in the configuration, with the first letter capitalized:

  • path as the property in the cobalt.properties file,

  • setPath as the java setter method.
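This naming rule is the standard JavaBean convention: a binder derives the setter name from the property key by capitalizing its first letter, then looks the method up via reflection. A self-contained sketch of that mechanism (setterFor is illustrative, not a Cobalt API):

```java
import java.lang.reflect.Method;

public class SetterBindingDemo {

    private String path;

    public void setPath(String path) { this.path = path; }

    public String getPath() { return path; }

    // Derive the setter name from a property key, as a bean binder would:
    // "path" -> "setPath".
    static String setterFor(String property) {
        return "set" + Character.toUpperCase(property.charAt(0)) + property.substring(1);
    }

    public static void main(String[] args) throws Exception {
        SetterBindingDemo handler = new SetterBindingDemo();
        // Locate and invoke the setter reflectively, as a configuration
        // binder would when it reads site.requestHandler[0].path=/my-path.
        Method setter = SetterBindingDemo.class.getMethod(setterFor("path"), String.class);
        setter.invoke(handler, "/my-path");
        System.out.println(handler.getPath()); // /my-path
    }
}
```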

The same configuration can be provided to the Docker container as an environment variable. For example, in docker-compose syntax:

environment:
    - "site.requestHandler[0].em-type=com.eidosmedia.cobalt.extensions.VideosFragment"
JSON Response example

Once the SiteRequestHandler is deployed, it is reachable at /_ssi/fragments/videos; however, that endpoint will for now return:

Figure 38. No template found

because no template for the requested type can be found.

If you add the debug parameter, /_ssi/fragments/videos?emk.outputMode=RAW (this works only with development.mode=true in the cobalt.properties configuration file, or if your host is in the trusted network, e.g. common.defaults.trusted.hosts=127\\.\\d+\\.\\d+\\.\\d+|::1|0:0:0:0:0:0:0:1), you can inspect the data model:

Debug Response
{
  "model" : {
    "id" : "model://req:videos/RAW/1/null",
    "data" : {
      "dataType" : "custom",
      "videos" : {
        "list" : [ {
          "id" : "x6d3wq8",
          "title" : "Liverpool vs Chelsea 1-0 - UCL 2004/2005 - Goal & Full Highlights",
          "thumbnail_240_url" : "http://s1.dmcdn.net/o6hRW/x240-Ge8.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x6d3wq8",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x6d3wq8\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x6d5uua",
          "title" : "AC Milan vs Liverpool 2-1 - UCL Final 2007 - HD 1080i Full Highlights",
          "thumbnail_240_url" : "http://s1.dmcdn.net/o79Iv/x240-6nI.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x6d5uua",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x6d5uua\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x6cy8y7",
          "title" : "Chelsea vs Paris Saint Germain 1-2 - UCL 2015/2016 (2nd Leg) Highlights (English Commentary) HD",
          "thumbnail_240_url" : "http://s1.dmcdn.net/o4WU8/x240-ZLV.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x6cy8y7",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x6cy8y7\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x6cy7ou",
          "title" : "Barcelona vs Roma 6-1 - UCL 2015/2016 Group Stage Highlights HD",
          "thumbnail_240_url" : "http://s2.dmcdn.net/o4Kll/x240-fX2.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x6cy7ou",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x6cy7ou\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x6cuzbu",
          "title" : "Barcelona vs Chelsea 5-1 (a.e.t.) - UCL 1999/2000 1/4 Final (2nd Leg) Highlights",
          "thumbnail_240_url" : "http://s2.dmcdn.net/o1Y65/x240-h0C.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x6cuzbu",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x6cuzbu\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x6cffog",
          "title" : "Barcelona vs Chelsea 2-1 - UCL 2004/2005 (1st Leg) - Full Highlights",
          "thumbnail_240_url" : "http://s2.dmcdn.net/oxJG8/x240-NZs.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x6cffog",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x6cffog\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x6bt1iz",
          "title" : "Real Madrid vs AC Milan 2-3 - UCL 2009/2010 - Full Highlights (English Commentary)",
          "thumbnail_240_url" : "http://s2.dmcdn.net/onw14/x240-uxB.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x6bt1iz",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x6bt1iz\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x6bt0l9",
          "title" : "FC Barcelona vs Chelsea 1-1 - UCL 2005/2006 2nd Leg - Full Highlights",
          "thumbnail_240_url" : "http://s2.dmcdn.net/onu1H/x240-sd4.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x6bt0l9",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x6bt0l9\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x67dpeq",
          "title" : "Liverpool vs Maribor 3-0 Extended Highlights HD 2017",
          "thumbnail_240_url" : "http://s2.dmcdn.net/nvKBa/x240-oGl.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x67dpeq",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x67dpeq\" allowfullscreen allow=\"autoplay\"></iframe>"
        }, {
          "id" : "x67b8tj",
          "title" : "Napoli vs Manchester City 2-4 Extended Highlights HD 2017",
          "thumbnail_240_url" : "http://s1.dmcdn.net/nvKqZ/x240-gLq.jpg",
          "embed_url" : "http://www.dailymotion.com/embed/video/x67b8tj",
          "embed_html" : "<iframe frameborder=\"0\" width=\"480\" height=\"270\" src=\"//www.dailymotion.com/embed/video/x67b8tj\" allowfullscreen allow=\"autoplay\"></iframe>"
        } ],
        "has_more" : true,
        "page" : 2
      },
      "id" : "req:videos",
      "type" : "videos",
      "baseType" : "videos"
    },
    "dataType" : "custom",
    "nodes" : { },
    "defaultContent" : false,
    "lastModified" : 1530802111901,
    "etag" : "fbf017eb/1530802111901",
    "contentLength" : 0,
    "children" : [ ],
    "totalPages" : 0,
    "outputMode" : "RAW",
    "page" : 1,
    "aggregators" : [ ]
  }
  ... //other fields
}

In order to render the data model, you have to create the videos template file and put it in the fragments folder of your theme (the default theme in this example) on your server. If you are using the Eclipse SDK:

template-creation
Figure 39. Videos template creation in Eclipse
<div class="videos">
    <span>Videos</span>
    <#list model.data.videos.list as video>
    <div class="media side-link" >
        <div class="media-left">
          <a href="${video.embed_url}">
             <img height="100" width="100" class="media-object" src="${video.thumbnail_240_url!''}" />
          </a>
      </div>
      <div class="media-body">
          <a href="${video.embed_url}">
            <h4 class="media-heading">${video.title}</h4>
          </a>
      </div>
    </div>
    </#list>
</div>

If you navigate again to the URL _ssi/fragments/videos, it returns:

rendered-page
Figure 40. Rendered page

Since this is supposed to be a fragment you can include it server side, in another template. For example with Freemarker:

<@server_include uri="/_ssi/fragments/videos"/>
template-server-inclusion
Figure 41. Server side inclusion in section template

When you use server side inclusion of internal content (content generated by Cobalt), the HTML cache (e.g. your CDN) will be unaware that the page has been built from two different pieces, unless you use a mechanism such as ESI, which is partially supported by Varnish and Akamai.

In this case, remember to enable ESI in the Site service cobalt.properties with site.templateEngine.esiEnabled=true

5.2.2. Adding cookies to the response

If you need to add cookies to the response you can leverage the PortalRequest object:

@Override
public AbstractData evalRootEntity(SiteKey siteKey, AggregationKey aggregationKey, PortalRequest portalRequest)
    throws DataException {
    ...
    CobaltCookie cookie = new CobaltCookie("my-cookie-key", "my-cookie-value");
    //you can set specific cookie properties
    cookie.setHttpOnly(true);
    portalRequest.getCookies().addCookie(cookie);
    ...
}

Add your cookie in the evalRootEntity method, which is called on every request, even if the resulting data model is cached.

5.3. Extending REST APIs

If you need output from the Site Service API in a different format, or you want to add your own custom logic without extra network overhead, you can add custom API endpoints directly on the Site Service.

Cobalt recognizes endpoints created through the Java API for RESTful Web Services (JAX-RS, defined in JSR 311).

5.3.1. Configuring the Site Service to look up new JAX-RS Resources

You will have to add to the cobalt.properties file a comma-separated list of the packages to be scanned for new endpoint classes:

common.resource.scan.basePackages=my.scan.package1,my.scan.package2

You need to restart the Cobalt server for the change to take effect.

5.3.2. Example: CustomResource

The JAX-RS resource CustomResource, given the id of a NodeData in Cobalt, retrieves it and applies some logic to customize the output before sending the final response.

CustomResource
package com.eidosmedia.cobalt.extensions;

//imports here

@Path("custom")
@Tag(name = "custom", description = "Custom resource to retrieve a customized view of the articles")
@Produces(MediaType.APPLICATION_JSON)
public class CustomResource extends SiteAwareResource {

	private DataManager dataManager;

	@Inject
	public void setDataManager(DataManager dataManager) {
		this.dataManager = dataManager;
	}

	public static class Custom {
		private String title;
		private Set<String> author;
		private Serializable content;
		public String getTitle() {
			return title;
		}
		public Set<String> getAuthor() {
			return author;
		}
		public Serializable getContent() {
			return content;
		}
		public Custom(String title, Set<String> author, Serializable content) {
			super();
			this.title = title;
			this.author = author;
			this.content = content;
		}
	}

	@GET
	@Path("{id}")
	public Custom getStory(@PathParam("id") @Parameter(description="The id of the content") NodeId nodeId) {
		try {
			NodeData node = dataManager.getNodeData(siteKey, nodeId, null, portalRequest);
			return new Custom(node.getTitle(), node.getAuthors(), node.getContent() );
		} catch (DataNotFoundException ex) {
			throw new PortalWebEndpointException(ex.getMessage(), ex.getStatusCode(), ErrorEntityType.ENTITY_NOT_FOUND);
		} catch (Exception ex) {
			throw new PortalWebEndpointException("Unexpected error", Response.Status.INTERNAL_SERVER_ERROR.getStatusCode(), ErrorEntityType.ERROR);
		}
	}
}

The package of the CustomResource must be configured among the scan packages:

common.resource.scan.basePackages=com.eidosmedia.cobalt.extensions

The new REST API answers to the path /custom/{id} as defined by the @Path annotations above the CustomResource class and the getStory method.

Whenever a new HTTP request is made to the API a new instance of CustomResource is automatically created by JAX-RS, which then deserializes the id path parameter into the NodeId instance defined in the getStory method signature and invokes the method.

Extending SiteAwareResource

By extending the parent class SiteAwareResource, the resource instance gets a populated siteKey field, containing the identifier of the site the request is made to.

Remember that the Site service is able to serve multiple web sites, distinguished by their hostnames.
DataManager interface and getNodeData method

The DataManager dependency is injected in the CustomResource instance:

Injecting the DataManager
private DataManager dataManager;

@Inject
public void setDataManager(DataManager dataManager) {
    this.dataManager = dataManager;
}
DataManager is the high level access point to the Cobalt repository. It contains methods to lookup NodeData objects by id, search for objects by their fields or sections and search via full-text queries.

The getNodeData method fetches the NodeData object, identified by NodeId and SiteKey:

NodeData getNodeData(SiteKey siteKey, NodeId nodeId, String[] aggregators, PortalRequest portalRequest) throws DataException

Available aggregator constants are defined in the class com.eidosmedia.portal.dao.Aggregators and define additional behaviors when retrieving NodeData objects.

JSON Response

The API response is the JSON-serialized version of the Custom bean, built from the retrieved NodeData to expose only the title, the authors and the content of the node.

{
  "title" : "History's 10 greatest accidental inventors",
  "author" : [ "Matteo Silvestri" ],
  "content" : "<?xml version=\"1.0\" encoding=\"UTF-8\"?><document emk-type=\"story\">...</document>"
}
Handling errors

Errors should be handled by throwing com.eidosmedia.portal.api.exceptions.PortalWebEndpointException.

public PortalWebEndpointException(String message, int status, ErrorEntityType error)

status is the HTTP status code for the error; error is a constant describing the kind of error.

When serialized, the exception results in a response similar to the following one:

{
  "error" : {
    "message" : "Node with id 0243-094f20b24420-caf9910ef78f-1001 not found in repository",
    "type" : "ENTITY_NOT_FOUND"
  }
}

5.4. Extending the TemplateEngine

The template engine can be extended by adding your custom functions. From any custom function class you can inject and use Site Service components, as explained in the dependency injection section.

As of version 3.2022.03 of the Cobalt distribution, the default template engine implementation is FreeMarker, an open source Apache project, widely used and well documented.

In FreeMarker, you can implement both the TemplateDirectiveModel and the TemplateMethodModel or TemplateMethodModelEx. Whether to implement a directive or a method depends on your specific needs; you can find a detailed explanation in the FreeMarker documentation.

5.4.1. Example: HelloWorldDirective

The example custom directive HelloWorldDirective simply writes a greeting for the logged-in user to the output stream.

package com.eidosmedia.cobalt.extensions;

// imports here

public class HelloWorldDirective implements TemplateDirectiveModel {

	//Injecting the current request proxy
	@Inject
	private PortalRequest pr;

	@Override
	public void execute(Environment env, @SuppressWarnings("rawtypes") Map params, TemplateModel[] loopVars, TemplateDirectiveBody body)
			throws TemplateException, IOException {
		//getting the output writer
		Writer o = env.getOut();
		//if there is a user logged in
		if (pr!=null && pr.getUser() != null) {
			o.write("Hello ");
			o.write(pr.getUser().getAlias());
		} else {
			o.write("No user logged in");
		}
	}

}

The injected PortalRequest contains information on the current request, similarly to HttpServletRequest, but also includes information specific to Cobalt, such as the current user:

UserData getUser();

The directive must be registered in the cobalt.properties:

site.templateEngine.function[0].class=com.eidosmedia.cobalt.extensions.HelloWorldDirective
site.templateEngine.function[0].key=hello

After restarting Cobalt, the directive can be used in FreeMarker templates:

<@hello />

The same configuration can be provided to the Docker container as an environment variable. For example, in docker-compose syntax:

environment:
    - "site.templateEngine.function[0].class=com.eidosmedia.cobalt.extensions.HelloWorldDirective"
    - "site.templateEngine.function[0].key=hello"

6. Sanitizing URLs

Implementations of the UrlSanitizer interface intercept each incoming HTTP request and allow you to redirect it to a new URL, applying custom sanitization logic:

/**
 * Sanitize request if needed.
 */
public interface UrlSanitizer {

    /**
     * Return a sanitized redirect URI and status code if needed.
     *
     * @param request
     * @return the sanitized redirect or <code>null</code> if there is no need to sanitize the URI.
     */
    Redirect sanitize(RequestURI request);
}

The sanitize method receives a RequestURI instance, containing all data related to the current request URI:

public class RequestURI {

    ...

    public String getScheme() {
        return scheme;
    }

    public String getHostname() {
        return hostname;
    }

    public int getPort() {
        return port;
    }

    public String getPath() {
        return path;
    }

    public Map<String, String[]> getQueryParameters() {
        return queryParameters;
    }

    public String getQueryString() {
        return queryString;
    }

    ...
}

It must return a Redirect object if the request has to be redirected to a sanitized URI, or null otherwise:

public class Redirect {

    private final int statusCode;
    private final String uri;

    public Redirect(int statusCode, String uri) {
        this.statusCode = statusCode;
        this.uri = uri;
    }

    public Redirect(String uri) {
        this.statusCode = 301;
        this.uri = uri;
    }

    public int getStatusCode() {
        return statusCode;
    }

    public String getUri() {
        return uri;
    }

}

Status code 301 Moved Permanently is used as the default; you can provide your own 3XX code in the constructor.

6.1. Configuration

In order to register a UrlSanitizer in the Site Service configuration, you just need to add the line:

site.url.sanitizer=com.eidosmedia.portal.url.sanitizer.PrettyUrlSanitizer

to the cobalt.properties file providing the fully qualified name of your implementation.

The same configuration can be provided to the Docker container as an environment variable. For example, in docker-compose syntax:

environment:
    - "site.url.sanitizer=com.eidosmedia.portal.url.sanitizer.PrettyUrlSanitizer"

6.2. Example: PrettyUrlSanitizer

The following example implementation, available in the Cobalt package, collapses repeated slashes, lowercases the path and prepends www. to the hostname when it is missing:

public class PrettyUrlSanitizer implements UrlSanitizer {

    @Override
    public Redirect sanitize(RequestURI request) {
        String path = request.getPath()
                .replaceAll("/{2,}", Matcher.quoteReplacement("/"))
                .toLowerCase();
        String hostname = request.getHostname();
        if (!hostname.startsWith("www.")) {
            hostname = "www." + hostname;
        }

        RequestURI newURI = new RequestURI(request.getScheme(), hostname, request.getPort(), path,
                                           request.getQueryParameters(), request.getQueryString());
        if (!newURI.equals(request)) {
            return new Redirect(newURI.toString());
        } else {
            return null;
        }
    }

}
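The path normalization above can be seen in isolation. The following sketch, plain Java with no Cobalt dependencies, shows what the slash-collapsing regex and the lowercasing do to a path (the sample paths are made up):

```java
public class PathNormalizationDemo {

    // Collapses runs of slashes and lowercases the path,
    // mirroring what PrettyUrlSanitizer does before comparing URIs.
    static String normalize(String path) {
        return path.replaceAll("/{2,}", "/").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("//News///Sports//Football")); // /news/sports/football
        System.out.println(normalize("/already/clean"));            // /already/clean
    }
}
```

If the normalized URI equals the original one, the sanitizer returns null and no redirect is issued.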

7. Directory Service Extensions

You can extend the Directory service by adding custom connectors and/or actions.

7.1. Custom Connector Development

A connector allows you to configure links to the external account providers in use. Cobalt automatically generates a placeholder for each external user inside its repository; from then on, the user is handled like a standard user.

The easiest way to create a new connector is to use the provided archetype; you can optionally change the default parameters during the wizard. This process will create:

  • a Connector skeleton class

  • a ConnectorResource skeleton class

  • a ConnectorData skeleton class

7.1.1. Connector

The Connector is the main class and it’s the place where you have to implement your business logic.

package com.example.cobalt.directory_ext;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.eidosmedia.portal.directory.webresources.connectors.ConnectionException;
import com.eidosmedia.portal.directory.webresources.connectors.Connector;
import com.eidosmedia.portal.directory.webresources.connectors.ConnectorUser;

public class GlobeConnector extends Connector<GlobeConnectorData> {
    private static final Logger logger = LoggerFactory.getLogger(GlobeConnector.class);

    /*
     * You can create setters to inject fields directly from the configuration xml.
     *
     * For example if you have:
     *
     * <Connector em-type="com.example.cobalt.directory_ext.GlobeConnector" length="10" />
     *
     * you can create the length field and setter:
     *
     * private int length;
     *
     * public void setLength(int length) {
     *     this.length = length;
     * }
     *
     * If you need more control over the configuration phase, you can implement the
     * com.eidosmedia.portal.Configurable interface.
     *
     * If you need to do something during the connector initialization (and/or shutdown) phase,
     * you can implement the com.eidosmedia.portal.Disposable interface.
     *
     * IMPORTANT
     * The connector resource must be placed in the same package and called <ConnectorName>Resource.
     * For example: com.example.GlobeConnector --> com.example.GlobeConnectorResource
     */

    @Override
    public ConnectorUser getUser(GlobeConnectorData data) throws ConnectionException {
        logger.debug("getUser - method enter with data={}", data);
        ConnectorUser connectorUser = null;
        // TODO implement your business logic to create the connector user
        logger.debug("getUser - method leave connectorUser={}", connectorUser);
        return connectorUser;
    }

}

As you can see, there is only one method, getUser(). It takes as input a bean containing the information received in the connector login method and returns an instance of ConnectorUser.

7.1.2. Connector Resource

To avoid re-implementing some of the methods provided in the abstract connector class, follow these conventions:

  • this class must be kept in the same package as your connector

  • the class name must be the same as your connector, with the suffix Resource (e.g. GlobeConnector requires GlobeConnectorResource).

  • the path must be defined with the prefix ConnectorResourcePaths.CONNECTORS; this mechanism is used to automatically define the key

package com.example.cobalt.directory_ext;

import javax.ws.rs.Path;

import com.eidosmedia.portal.directory.api.paths.ConnectorResourcePaths;
import com.eidosmedia.portal.directory.webresources.connectors.ConnectorResource;

@Path(ConnectorResourcePaths.CONNECTORS + "/globe")
public class GlobeConnectorResource extends ConnectorResource<GlobeConnectorData> {

}

In this class you can optionally add additional REST methods to expose.

7.1.3. Connector Data

This is the bean class that is used to transfer JSON body data sent with the connector’s login method to the Connector getUser() implementation.

It must have an empty constructor and a getter/setter for every property you need to expose or automatically map.
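A minimal sketch of such a bean follows; the token and locale properties are hypothetical, stand-ins for whatever fields your connector's login body actually carries:

```java
public class GlobeConnectorData {

    // hypothetical properties, automatically mapped from the JSON login body
    private String token;
    private String locale;

    // empty constructor, required for the automatic JSON mapping
    public GlobeConnectorData() {
    }

    public String getToken() { return token; }
    public void setToken(String token) { this.token = token; }

    public String getLocale() { return locale; }
    public void setLocale(String locale) { this.locale = locale; }
}
```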
Configuration

To use the new connector you have to configure your instance of the Directory service. The files to change are the following:

  • directory.xml

  • context.xml

In directory.xml, locate the Connectors section; if you cannot find this section, feel free to create it. The configuration is similar to the following:

<DirectoryService>
...
    <Connectors>
        <Connector
                em-type="com.eidosmedia.cobalt.connector.GlobeConnector"
                property="value" property2="value2" />
    </Connectors>
...
</DirectoryService>

In order to make your Tomcat aware of the new connector, after exporting the compiled code into a JAR file (e.g. globeConnector.jar) you have to edit the context.xml (the EXPORT_FOLDER placeholder must point to the folder where you exported the JAR file):

<Context>
...
    <Resources className="org.apache.catalina.webresources.StandardRoot">
        <PreResources className="org.apache.catalina.webresources.DirResourceSet"
            base="/EXPORT_FOLDER/lib" internalPath="/" webAppMount="/WEB-INF/lib" />
    </Resources>
...
</Context>

The use of PreResources is documented at https://tomcat.apache.org/tomcat-8.0-doc/config/resources.html

Remember to also copy all the dependencies of your code into the PreResources lib folder.

7.2. Action

An action allows you to configure a custom operation that is executed after the call of certain Directory APIs (at the moment the supported APIs are: users/create, users/register, users/update). For example, an action can be called after an update to align an external database with the updated user.

7.2.1. Example: Local Action

The class (e.g. GlobeAction) handles the business logic applied after the Cobalt API execution. It contains an implementation of the getKey method, already filled in by the archetype.

In this class you can put properties that are evaluated while reading the configuration file, through the corresponding setters.

In this class you have to implement the method execute(UserData data, HttpServletRequest request), writing the logic to execute after the API call (e.g. calling an API or a web service). The data object must extend the AuditData class, so you can create actions that use the following implementations:

  • UserData

  • GroupData

  • SessionData

  • AvatarData

The execute method returns a boolean: true if the action was executed correctly, false in case of failure.

In case of failure the action is stored and executed again at a later time (see the Configuration section for more information).

package com.eidosmedia.portal.directory.webresources.actions;

import javax.servlet.http.HttpServletRequest;

import org.apache.commons.configuration.HierarchicalConfiguration;

import com.eidosmedia.portal.DestructionException;
import com.eidosmedia.portal.InitializationException;
import com.eidosmedia.portal.configuration.ConfigurationException;
import com.eidosmedia.portal.directory.UserData;

public class GlobeAction extends Action<UserData> {

    @Override
    public void init() throws InitializationException {
        //Place init logic here
    }

    @Override
    public void destroy() throws DestructionException {
        //Place destroy logic here
    }

    @Override
    public void loadConfiguration(HierarchicalConfiguration configuration) throws ConfigurationException {
        //Place configuration loading logic here
    }

    @Override
    public boolean execute(UserData userData, HttpServletRequest request) throws ActionException {
        return false;
    }

}
Configuration

To use the new action you have to configure your instance of the Directory service. The files to change are the following:

  • directory.xml

  • context.xml

In directory.xml, locate the Actions section; if you cannot find this section, feel free to create it. The configuration is similar to the following:

<DirectoryService>
...
    <Actions failedActionsCron="*/30 * * * * ?" failedActionsMaxRetries="5">
        <Action
            em-type="com.eidosmedia.cobalt.action.GlobeAction" trigger="UPDATE_USER" property="value" property2="value2" />
    </Actions>
...
</DirectoryService>

Each action must specify the trigger attribute, which is used to link the action to the corresponding API. At the moment the allowed values are the following (more will be introduced):

  • CREATE_USER

  • UPDATE_USER

  • REGISTER_USER

The attributes failedActionsCron and failedActionsMaxRetries are used in case of failed actions: the first uses a cron expression to set the interval at which failed actions are executed again; the second sets the maximum number of retries for each failed action.

In order to make your Tomcat aware of the new action, after exporting the compiled code into a JAR file (e.g. globeAction.jar) you have to edit the context.xml (the EXPORT_FOLDER placeholder must point to the folder where you exported the JAR file):

<Context>
...
    <Resources className="org.apache.catalina.webresources.StandardRoot">
        <PreResources className="org.apache.catalina.webresources.DirResourceSet"
            base="/EXPORT_FOLDER/lib" internalPath="/" webAppMount="/WEB-INF/lib" />
    </Resources>
...
</Context>

The use of PreResources is documented at https://tomcat.apache.org/tomcat-8.0-doc/config/resources.html

7.3. Signup

For the signup of a new user from a site, Cobalt provides in the default theme an example registration page. This page is a simple form that calls the Directory service endpoint /directory/users/register. This endpoint requires a captcha created with reCaptcha.

7.3.1. Captcha configuration

After creating a new captcha on the reCaptcha site, it is necessary to associate the secret key with your web site. To do this, go to the custom attributes section and add a new key called recaptchaSiteKey with the value of your secret key.

image::captcha_key.png

After this operation it is also necessary to associate the public key that will be used on your site to generate the captcha. The association of the public key can be made in two ways:

  • Under the tenant:

If you choose to put the key under the tenant, you have to update the tenant.yml file in your Cobalt installation and add the attribute captchaSecretKey containing the value of the public key.

#Default tenant configuration:
#- id: missing because is null for default tenant
#- name: Mandatory, contains the tenant's name.
#- description: Contains the tenant's description.
#- captchaSecretKey: Optional, contains the reCaptcha secret key.
tenant:
  name: default tenant
  description: This is the first tenant
  captchaSecretKey: <reCAPTCHA-public-key>
  • Under the realm:

If you choose to put the key under the realm, you have to call the update realm API of the Directory service, /directory/realms/update, with this JSON body:

{
    "id" : "a29e4acb-6c3b-46fe-8627-d0690601f870",
    "captchaSecretKey" : "YOUR_PUBLIC_KEY"
}

8. How to do searches

In this section we list some topics related to searches.

8.1. Searching on custom fields

Suppose you add a custom parameter, called 'myattr', to your node data model. The easiest way to do this is to map the parameter during publication in the nodeData.js script. For example:

nodeData.attributes.custom.myattr='123';

This will produce a node data with the following structure:

{
  "id": "025a-0ecd13a85f76-ee6d18ef47a7-1000",
  "version": "025a-0ecd13a85f76-ee6d18ef47a7-1000-162739609282",
...
  "attributes": {
...
    "myattr": "123"

On the Elasticsearch index this attribute is mapped into a field called nodeMeta. For example:

{
  "_index": "cobalt-dev@default.test-site_live.2020-01",
  "_type": "_doc",
  "_id": "025a-0ecd13a85f76-ee6d18ef47a7-1000-162739609282",
...
  "nodeMeta": {
...
    "myattr": "123",

If you have the version of your node, you can look it up in Elasticsearch with a simple GET on your index. For example, in this case the URL is: http://localhost:9200/cobalt-dev@default.test-site_live.2020-01/_doc/025a-0ecd13a85f76-ee6d18ef47a7-1000-162739609282

Right now, if you try to run a search (for example with a CQL statement) on that field, the result will be empty, even if you certainly have a matching article.

select from nodes where baseType='article' and attributes.myattr='123'

To enable searching on this new field, you have to map it. To do so, modify your default-mapping.json file, located under /cobalt-base/conf/seXX/live (where XX is your Elasticsearch version, for example 65/68/74). Once you open the file, add your new attributes under the properties.nodeMeta field. For example:

{
...
    "properties": {
...
        "nodeMeta": {
            "type": "object",
            "dynamic": false,
            "properties": {
                "myattr": {
                    "type": "keyword"
                }
...

Now you have to drop the index and reindex your website (yes, you can do it from the admin UI within Swing). After the reindex is completed, you can replay the same query and you will get the expected result.

In default-mapping.json, when you define your properties, you can set many configuration options, but this part is Elasticsearch specific, not Cobalt specific, so you can refer to the official Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html

8.2. The search API

The web-base service exposes a REST API to search over the contents published on a particular site.

Here we collect some documentation on how to use it, together with some interesting examples.

The basic API responds to GET requests at the URL /api/search. We will see the corresponding POST version later.

Here below is the list of all the available parameters. All of them are optional query string (QS) parameters.

  • query: The query string to be searched for in a full-text search.

  • kinds (default: content): The kinds of desired results. Multiple values are combined in an OR condition.

  • baseTypes: The base types of desired results. Multiple values are combined in an OR condition.

  • types: The types of desired results. Multiple values are combined in an OR condition.

  • sections: The list of sections in which to search for contents. Multiple values are combined in an OR condition. If not specified, the search spans all the published contents of the site.

  • startDate: Filters results based on the publication date. The format to be used is standard ISO 8601 (yyyy-MM-dd’T’HH:mm:ssZ).

  • endDate: Filters results based on the publication date. Same ISO 8601 format as startDate.

  • param.PARAM: Filters based on a specific param value. PARAM must be replaced with a valid param path.

  • tagf.FAMNAME: Filters based on a specific tag family value. FAMNAME must be replaced with a valid family name.

  • limit (default: 20): Used to paginate results; defines the page size.

  • offset (default: 0): Used to paginate results; specifies the position of the first result, counted from the first element of the result set.

  • sorting: Defines the list of parameters to be used as sorting conditions. For descending order, put a - in front of the field name (e.g. -sys.updateTime). If not set, an internal algorithm orders results by relevance, giving more priority to the most recent ones (results from the latest 3 days get a 2x boost; the boost decreases to 0 over the previous 60 days).

  • termAggregation: The list of terms for which to produce an aggregation based on the current search. If you pass more than one parameter of this type, the API produces all the required aggregations. It is possible to cap the number of aggregated values by specifying termAggregation=myterm:12; if the limit is not set, the default termsAggregationLimit is used.

  • tagsFamilies: The list of tag families for which to produce an aggregation based on the current search. This field is almost an alias of termAggregation, except that you do not have to put tags. in front of the family name: for the default family, which is called tags, write tagsFamilies=tags rather than tagsFamilies=tags.tags. Unlike termAggregation, the limit for the aggregated results cannot be set; the default termsAggregationLimit is always used.

  • dateHistogramInterval: The interval used to produce a date histogram aggregation. If not set, the histogram is not produced. The format is (0-999)(s|m|d|M|y); for example, 1y produces aggregations split by year.

  • dateHistogramMaxDate (default: now): The maximum date used to produce the date histogram aggregation.

  • dateHistogramMinDate (default: maxDate - 1y): The minimum date used to produce the date histogram aggregation.

  • aggregators: The aggregators to be used to build the JSON model of the result.
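As a concrete illustration, the sketch below builds a GET URL for /api/search from a few of the parameters above (plain Java; the parameter values are made up):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class SearchUrlBuilder {

    // Builds the query string for /api/search from a map of parameters.
    static String buildQuery(Map<String, String> params) {
        StringBuilder sb = new StringBuilder("/api/search");
        char sep = '?';
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(sep).append(e.getKey()).append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
            sep = '&';
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("query", "champions league");
        params.put("baseTypes", "article");
        params.put("limit", "10");
        params.put("offset", "20");               // third page with limit=10
        params.put("sorting", "-sys.updateTime"); // newest first
        System.out.println(buildQuery(params));
        // /api/search?query=champions+league&baseTypes=article&limit=10&offset=20&sorting=-sys.updateTime
    }
}
```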

8.2.1. Search API with POST method

There is also a POST version of the previously described API. You have to send the search parameters in the body in JSON format.

{
  "query": "string",
  "kinds": [
    "string"
  ],
  "baseTypes": [
    "string"
  ],
  "types": [
    "string"
  ],
  "sections": [
    "string"
  ],
  "startDate": "2020-03-01T15:29:21.027Z",
  "endDate": "2020-03-01T15:29:21.027Z",
  "params": {
    "additionalProp1": [
      "string"
    ],
    "additionalProp2": [
      "string"
    ],
    "additionalProp3": [
      "string"
    ]
  },
  "tags": {
    "additionalProp1": [
      "string"
    ],
    "additionalProp2": [
      "string"
    ],
    "additionalProp3": [
      "string"
    ]
  },
  "limit": 0,
  "offset": 0,
  "sorting": [
    {
      "path": "string",
      "order": "ASC"
    }
  ],
  "termAggregations": [
    {
      "term": "string",
      "maxNumberOfValues": 0
    }
  ],
  "tagsFamiliesAggregations": [
    "string"
  ],
  "dateHistogramInterval": {
    "value": 0,
    "intervalUnit": "s"
  },
  "dateHistogramMinDate": "2020-03-01T15:29:21.027Z",
  "dateHistogramMaxDate": "2020-03-01T15:29:21.027Z",
  "aggregators": [
    "string"
  ]
}
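As a minimal sketch, the body for the POST version can be assembled programmatically, including only the fields you need (the helper function is illustrative, not part of any SDK):

```python
import json

def build_search_body(query, types=None, tags=None, limit=10, offset=0, sorting=None):
    """Assemble a JSON body for the POST search endpoint.

    Only the fields you actually need have to be present; the names
    mirror the query-string parameters of the GET version.
    """
    body = {"query": query, "limit": limit, "offset": offset}
    if types:
        body["types"] = types          # e.g. ["article"]
    if tags:
        body["tags"] = tags            # e.g. {"tags": ["sport"]}
    if sorting:
        body["sorting"] = sorting      # e.g. [{"path": "sys.updateTime", "order": "DESC"}]
    return body

body = build_search_body(
    "java",
    types=["article"],
    sorting=[{"path": "sys.updateTime", "order": "DESC"}],
)
print(json.dumps(body, indent=2))
```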

8.3. Use the search API with a CQL statement

Cobalt Query Language (CQL) is an abstraction on top of our internal Domain Specific Language (DSL) to express a query on our repository.

The grammar supported by CQL is expressed in the following attachment.

We expose a search API that lets you express your search criteria using a CQL string.

The API method is POST and the endpoint is /api/search/cql. It expects a JSON body containing the CQL query, for example:

{
  "query": "SELECT FROM NODES WHERE sys.type='article'"
}

If you need to apply some aggregators, you can also set an aggregators field in the JSON body with the list of aggregators to apply.

Note that you can omit SELECT FROM NODES WHERE; it will be added automatically. The previous JSON body is equivalent to the following:

{
  "query": "sys.type='article'"
}
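The server-side prefixing can be mimicked client-side; a minimal sketch (the helper name is illustrative, and the server already performs this for you):

```python
def normalize_cql(query: str) -> str:
    """Prepend the implicit 'SELECT FROM NODES WHERE' prefix when the
    caller passes only the condition part, mirroring what the CQL
    endpoint does automatically (a client-side sketch, not server code)."""
    prefix = "SELECT FROM NODES WHERE"
    if query.strip().upper().startswith("SELECT"):
        return query.strip()
    return f"{prefix} {query.strip()}"

print(normalize_cql("sys.type='article'"))
# -> SELECT FROM NODES WHERE sys.type='article'
```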

8.4. Search API with a DSL statement

If the simple search (or the CQL-based one) is not enough to express your search criteria, you can fall back to expressing it directly with a JSON model representing our internal DSL (Domain Specific Language) for searches.

The API method is POST and the endpoint is /api/search/dsl. The body must be JSON in the following format (more details below):

{
  "properties" : {
    "offset" : 0,
    "maxResults" : 10,
    "queryScope" : {
      "type" : "timeRange",
      "fromDate" : "2016-01-01T00:00:00.000Z",
      "toDate" : "2017-01-01T00:00:00.000Z"
    },
    "highLightFields" : [ "allText" ],
    "syncIndexBeforeQuery" : false,
    "highLightPreTag" : "<em>",
    "highLightPostTag" : "</em>",
    "fragmentSize" : 150,
    "noFragments" : 3,
    "noScoreQuery" : false,
    "varargs" : { }
  },
  "bool" : {
    "and" : [ {
      "andnot" : [ {
        "type" : "exactMatch",
        "path" : "sys.type",
        "term" : "site"
      } ]
    } ],
    "or" : [ {
      "type" : "match",
      "path" : "sys.type",
      "match" : "article"
    }, {
      "type" : "exactMatch",
      "path" : "sys.type",
      "term" : "article"
    }, {
      "type" : "contain",
      "path" : "content",
      "term" : "java"
    }, {
      "type" : "in",
      "path" : "sys.type",
      "terms" : [ "article", "image", "section" ]
    }, {
      "type" : "exists",
      "path" : "attributes.sport"
    }, {
      "type" : "dateBetween",
      "path" : "pubInfo.publicationTime",
      "low" : "2018-11-01T00:00:00.000Z",
      "high" : "2019-01-18T11:29:25.292Z"
    }, {
      "type" : "dateGreater",
      "path" : "pubInfo.publicationTime",
      "low" : "2018-11-01T00:00:00.000Z",
      "includeValue" : false
    }, {
      "type" : "dateGreaterEq",
      "path" : "pubInfo.publicationTime",
      "low" : "2018-11-01T00:00:00.000Z",
      "includeValue" : true
    }, {
      "type" : "dateLess",
      "path" : "pubInfo.publicationTime",
      "high" : "2019-01-18T11:29:25.292Z",
      "includeValue" : false
    }, {
      "type" : "dateLessEq",
      "path" : "pubInfo.publicationTime",
      "high" : "2019-01-18T11:29:25.292Z",
      "includeValue" : true
    }, {
      "type" : "longBetween",
      "path" : "attributes.votes",
      "low" : 1,
      "high" : 100
    }, {
      "type" : "longGreater",
      "path" : "attributes.votes",
      "low" : 1,
      "includeValue" : false
    }, {
      "type" : "longGreaterEq",
      "path" : "attributes.votes",
      "low" : 1,
      "includeValue" : true
    }, {
      "type" : "longLess",
      "path" : "attributes.votes",
      "high" : 100,
      "includeValue" : false
    }, {
      "type" : "longLessEq",
      "path" : "attributes.votes",
      "high" : 100,
      "includeValue" : true
    }, {
      "type" : "doubleBetween",
      "path" : "attributes.percentage",
      "low" : 0.5,
      "high" : 0.9
    }, {
      "type" : "doubleGreaterEq",
      "path" : "attributes.percentage",
      "low" : 0.5,
      "includeValue" : true
    }, {
      "type" : "doubleGreater",
      "path" : "attributes.percentage",
      "low" : 0.5,
      "includeValue" : false
    }, {
      "type" : "doubleLessEq",
      "path" : "attributes.percentage",
      "high" : 0.9,
      "includeValue" : true
    }, {
      "type" : "doubleLess",
      "path" : "attributes.percentage",
      "high" : 0.9,
      "includeValue" : false
    }, {
      "type" : "stringBetween",
      "path" : "attributes.firstname",
      "low" : "Alessandro",
      "high" : "Samuele"
    }, {
      "type" : "stringGreater",
      "path" : "attributes.firstname",
      "low" : "Alessandro",
      "includeValue" : false
    }, {
      "type" : "stringGreaterEq",
      "path" : "attributes.firstname",
      "low" : "Alessandro",
      "includeValue" : true
    }, {
      "type" : "stringLess",
      "path" : "attributes.firstname",
      "high" : "Samuele",
      "includeValue" : false
    }, {
      "type" : "stringLessEq",
      "path" : "attributes.firstname",
      "high" : "Samuele",
      "includeValue" : true
    }, {
      "type" : "nested",
      "path" : "attributes.users",
      "bool" : {
          "and": [
              {
                  "type": "match",
                  "path": "firstName",
                  "match": "Jim"
              },
              {
                  "type": "match",
                  "path": "lastName",
                  "match": "Brown"
              }
          ]
       }
    }, {
      "andnot" : [ {
        "type" : "exactMatch",
        "path" : "sys.type",
        "term" : "section"
      } ]
    } ]
  },
  "postAggregationsCondition" : {
    "and" : [ {
      "type" : "exactMatch",
      "path" : "sys.type",
      "term" : "article"
    } ]
  },
  "sort" : {
    "type" : "fields",
    "sorts" : [ {
      "path" : "title",
      "order" : "DESC"
    } ]
  },
  "aggregations" : [ {
    "type" : "terms",
    "name" : "tags",
    "fieldPath" : "tags.tags",
    "size" : 5
  }, {
    "type" : "date_histogram",
    "name" : "published",
    "fieldPath" : "pubInfo.publicationTime",
    "aggregations" : [ {
      "type" : "date_histogram",
      "name" : "published",
      "fieldPath" : "pubInfo.publicationTime",
      "interval" : "1M"
    } ],
    "interval" : "1Y"
  } ]
}

If you need to apply some aggregators, you can also set an aggregators field in the JSON body with the list of aggregators to apply.

8.4.1. properties

The properties block lets you specify some parameters of the search. In particular:

Field Description

offset

The offset for handling pagination, default is 0.

maxResults

The page size for handling pagination, default is 10.

queryScope

The scope of the search. Some details:

* timeRange: query on a specific range of time. Example: { "type": "timeRange", "fromDate" : "2016-01-01T00:00:00.000Z", "toDate" : "2017-01-01T00:00:00.000Z" }.

* all: search on all the temporal silos that are available and online, as defined by the system administration. No parameters needed. Example: { "type": "all" }

* last: search on the last 2 temporal sections (each one covering 2 years) that are available and online, as defined by the system administration. Example: { "type": "last" }

* timeShard: search on a timed shard range from the from shard to the to shard. Example: { "type": "timeShard", "from": 1, "to": 2047, "siteShardId": 123}. Where: from is the lower bound shard (if null, there is no lower bound); to is the upper bound shard (if null, there is no upper bound); siteShardId is the site shard id, optional.

highLightFields

Adds a full text field to be highlighted. Note that a field will be highlighted only if a full text query is involved in the condition of the query. You can specify a list of fields, since it's an array.

highLightPreTag

Set the pre tag for highlighting. Default is <em>.

highLightPostTag

Set the post tag for highlighting. Default is </em>.

fragmentSize

Specify the fragment size around the highlighted text. Default is 150.

noFragments

Specify the number of fragments to retrieve. Default is 3.

syncIndexBeforeQuery

Specify if the index must be synced before querying, default is false.

noScoreQuery

No sort score query: drops scoring for the search, independently of the presence of SortScoreRecentBoosted in the sort order. Default is false.

scrollCursorDurationSec

Define the scroll cursor duration. Default is null.

8.4.2. bool

The bool block lets you specify your search criteria, making it possible to express more complex boolean expressions.

In the bool block you can specify a Boolean Condition. A Boolean Condition provides four blocks:

  • and a list of Condition to be considered in a logic AND

  • or a list of Condition to be considered in a logic OR

  • andnot a list of Condition to be considered in logic AND where each Condition is negated

  • ornot a list of Condition to be considered in logic OR where each Condition is negated

If multiple fields are specified, they are considered together, as if joined by a virtual AND.

The bool structure in JSON is:

{
    "bool": {
        "and": [],
        "or": [],
        "andnot": [],
        "ornot": []
    }
}
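The structure above can be assembled programmatically; a minimal sketch (the helper names are illustrative, not part of the API):

```python
def bool_condition(and_=None, or_=None, andnot=None, ornot=None):
    """Build a Boolean Condition block; empty lists are omitted so the
    resulting JSON stays minimal."""
    block = {}
    if and_:
        block["and"] = and_
    if or_:
        block["or"] = or_
    if andnot:
        block["andnot"] = andnot
    if ornot:
        block["ornot"] = ornot
    return block

def exact_match(path, term):
    """A Field Condition of type exactMatch."""
    return {"type": "exactMatch", "path": path, "term": term}

# articles that are NOT in the "drafts" section (section name is illustrative)
cond = {"bool": bool_condition(
    and_=[exact_match("sys.type", "article")],
    andnot=[exact_match("sys.section", "drafts")],
)}
```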

A Condition can be:

  • A Boolean Condition itself. This will give you the possibility to express complex AND - OR sequences. For example: { "bool": { "and": [ { "type": "bool", "or": [ ] } ] } }.

  • A Field Condition.

A Field Condition can be:

  • contain: search for all the records that contain the term on the given path; it's also possible to specify a boost, for example { "type" : "contain", "path" : "content", "term" : "java", "boost": 2.5 }. The contain condition is applicable only to 'text' fields. It works with wildcards.

  • exactMatch: search for an exact match of the term on the given path, for example { "type" : "exactMatch", "path" : "sys.type", "term" : "article" }. The condition can be applied only to 'keyword' fields and 'text' fields with an 'exact' sub-field of 'keyword' type. It works with wildcards.

  • in: an exact match for the provided terms on the given path, for example { "type" : "in", "path" : "sys.type", "terms" : [ "article", "image", "webpage" ] }. The condition can be applied only to 'keyword' fields and 'text' fields with an 'exact' sub-field of 'keyword' type.

  • exists: check if the given path exists, for example { "type" : "exists", "path" : "attributes.sport" }

  • match: a match query for the given path; it's also possible to specify a boost, for example { "type" : "match", "path" : "attributes.sport", "match": "xyz", "boost": 2.5 }. The condition can be applied to 'text', 'keyword', 'long', 'integer' and 'double' fields. It works with wildcards.

  • matchPhrase: a phrase match query for the given path; it's also possible to specify a boost, for example { "type" : "matchPhrase", "path" : "attributes.sport", "phrase": "xyz", "boost": 2.5 }. The condition can be applied only to 'text' fields.

  • dateBetween: between a date, for example { "type" : "dateBetween", "path" : "pubInfo.publicationTime", "low" : "2018-11-01T00:00:00.000Z", "high" : "2019-01-18T11:29:25.292Z" }

  • dateLess: less than a date, for example { "type" : "dateLess", "path" : "pubInfo.publicationTime", "high" : "2019-01-18T11:29:25.292Z" }

  • dateLessEq: less than a date, including the value, for example { "type" : "dateLessEq", "path" : "pubInfo.publicationTime", "high" : "2019-01-18T11:29:25.292Z" }

  • dateGreater: greater than a date, for example { "type" : "dateGreater", "path" : "pubInfo.publicationTime", "low" : "2019-01-18T11:29:25.292Z" }

  • dateGreaterEq: greater than a date, including the value, for example { "type" : "dateGreaterEq", "path" : "pubInfo.publicationTime", "low" : "2019-01-18T11:29:25.292Z" }

  • longBetween: between a long, for example { "type" : "longBetween", "path" : "attributes.votes", "low" : 1, "high" : 100 }

  • longLess: less than a long, for example { "type" : "longLess", "path" : "attributes.votes", "high" : 100 }

  • longLessEq: less than a long, including the value, for example { "type" : "longLessEq", "path" : "attributes.votes", "high" : 100 }

  • longGreater: greater than a long, for example { "type" : "longGreater", "path" : "attributes.votes", "low" : 100 }

  • longGreaterEq: greater than a long, including the value, for example { "type" : "longGreaterEq", "path" : "attributes.votes", "low" : 100 }

  • doubleBetween: between two doubles, for example { "type" : "doubleBetween", "path" : "attributes.percentage", "low" : 0.1, "high" : 0.4 }

  • doubleLess: less than a double, for example { "type" : "doubleLess", "path" : "attributes.percentage", "high" : 0.4 }

  • doubleLessEq: less than a double, including the value, for example { "type" : "doubleLessEq", "path" : "attributes.percentage", "high" : 0.4 }

  • doubleGreater: greater than a double, for example { "type" : "doubleGreater", "path" : "attributes.percentage", "low" : 0.4 }

  • doubleGreaterEq: greater than a double, including the value, for example { "type" : "doubleGreaterEq", "path" : "attributes.percentage", "low" : 0.4 }

  • stringBetween: between two strings, for example { "type" : "stringBetween", "path" : "attributes.name", "low" : "Alessandro", "high" : "Matteo" }

  • stringLess: less than a string, for example { "type" : "stringLess", "path" : "attributes.name", "high" : "Matteo" }

  • stringLessEq: less than a string, including the value, for example { "type" : "stringLessEq", "path" : "attributes.name", "high" : "Matteo" }

  • stringGreater: greater than a string, for example { "type" : "stringGreater", "path" : "attributes.name", "low" : "Alessandro" }

  • stringGreaterEq: greater than a string, including the value, for example { "type" : "stringGreaterEq", "path" : "attributes.name", "low" : "Alessandro" }

  • nested: available since ES 6.8, searches for all the records that satisfy a secondary condition. The nested condition can be applied to 'nested' fields only, for example { "type" : "nested", "path" : "attributes.users", "bool" : { "and": [ { "type": "match", "path": "firstName", "match": "Jim" }, { "type": "match", "path": "lastName", "match": "Brown" } ] } }. In the sample query, the path attributes.users is the name of a nested field defined in the default-mapping.json file; the boolean condition to apply is passed as the third parameter. The nested type is a specialised version of the object data type that allows arrays of objects to be indexed in a way that they can be queried independently of each other.

Below is an extract of the mapping file corresponding to the attributes.users field:

{
...
    "attributes": {
            "type": "object",
            "dynamic": false,
            "properties": {
                "players": {
                    "type": "nested",
                    "properties": {
                        "firstName": {
                            "type": "keyword"
                         },
                        "lastName": {
                            "type": "keyword"
                        }
                    }
                }
            }
        }
...

To manage the SCORE for query operators, the Elastic analyzed fields must be listed in the analyzedScoringFieldsPatterns section of the indexCfg.json file, while the non-analyzed fields go in exactScoringFieldsPatterns. If a field is not listed in either, it will be treated as a FILTER. indexCfg.json is an optional file located under /cobalt-base/conf/seXX/live (where XX is your Elastic version, for example 65/68/74), the same folder as the default-mapping.json file.
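As a hedged sketch, the two sections might look like this inside indexCfg.json (the exact layout, surrounding keys, and the sample patterns are illustrative and may differ in your release):

```json
{
  "analyzedScoringFieldsPatterns": [ "allText", "attributes.*" ],
  "exactScoringFieldsPatterns": [ "sys.type", "tags.*" ]
}
```

Any field whose path matches one of the patterns gets a score contribution from the corresponding operator family; everything else acts as a filter.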

Below a table summarizing the main DSL field conditions:

Field condition DSL operator CQL operator Applicable types Score enabled Filter enabled Boostable Supports wildcard

FieldContain

contain

~

text

if field is in a configurable list of pattern "analyzedScoringFieldsPatterns"

if field is NOT in a configurable list of pattern "analyzedScoringFieldsPatterns"

yes

yes

FieldMatch

match

*=

text,keyword,long,integer,double

if field is in a configurable list of pattern "analyzedScoringFieldsPatterns"

if field is NOT in a configurable list of pattern "analyzedScoringFieldsPatterns"

yes

yes

FieldMatchPhrase

matchPhrase

#=

text

if field is in a configurable list of pattern "analyzedScoringFieldsPatterns"

if field is NOT in a configurable list of pattern "analyzedScoringFieldsPatterns"

yes

no

ExactMatch

is

=

keyword,text with "exact" keyword sub-field

if field is in a configurable list of pattern "exactScoringFieldsPatterns"

if field is NOT in a configurable list of pattern "exactScoringFieldsPatterns"

no

yes

ExactIn

in

IN

keyword,text with "exact" keyword sub-field

if field is in a configurable list of pattern "exactScoringFieldsPatterns"

if field is NOT in a configurable list of pattern "exactScoringFieldsPatterns"

no

no

FieldNestedCondition

nested

NEST

nested

n/a

n/a

n/a

n/a

8.4.3. postAggregationsCondition

In the postAggregationsCondition block you can specify another Boolean Condition, to be applied after the aggregation phase.

For example:

{
  "postAggregationsCondition": {
    "and": [ ],
    "or": [ ],
    "andnot": [ ],
    "ornot": [ ]
  }
}

8.4.4. sort

In the sort block you can define the way you want to sort the result set.

You have two options:

  • Sort by fields condition

  • Sort the most recent contents, with a boost function

Sort by fields condition

To do that, define a JSON block like this:

{
    "sort": {
        "type": "fields",
        "sorts": [{
            "path": "my.attr1",
            "order": "DESC"
        }, {
            "path": "my.attr2",
            "order": "ASC"
        }, {
            "path": "my.attr3"
        }]
    }
}
Sort by most recent with boost
Sort by most recent with boost

The boost function is modelled with a Gaussian decay function. Via the JSON you can control the settings of this Gaussian curve.

To do that, define a JSON block like this:

{
    "sort": {
        "type": "scoreRecentBoosted",
        "dateFieldPath": "pubInfo.publicationTime",
        "origin": "now",
        "offset": "3d",
        "scale": "60d",
        "weight": 2,
        "boostMode": null
    }
}

Where:

Field Description

dateFieldPath

The path of the date field to be used as the sorting source; the default is pubInfo.publicationTime.

origin

The central point for which the distance is calculated, default is now.

offset

If an offset is defined, the decay will only be computed for documents with a distance greater than the defined offset.

scale

Defines the distance from origin + offset at which the computed score will equal decay parameter.

weight

Defines the boost for documents in the range origin ± offset; at a scale distance the boost will have decayed towards 0.

boostMode

Defines the boost mode; by default it operates in sum mode.
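For intuition, the decay above can be sketched in Python. This is modelled after Elasticsearch's gauss decay function (where the score reaches a decay value, conventionally 0.5, at scale distance beyond the offset); the helper and its defaults are an approximation for illustration, not the exact server implementation:

```python
import math

def gauss_boost(distance_days, offset_days=3, scale_days=60, weight=2.0, decay=0.5):
    """Gaussian decay boost, Elasticsearch-style: full weight inside the
    offset window, then a bell-curve falloff sized so that the curve
    reaches `decay` at `scale` distance beyond the offset."""
    adjusted = max(0.0, distance_days - offset_days)
    # sigma^2 chosen so exp(-scale^2 / (2*sigma^2)) == decay
    sigma2 = -scale_days ** 2 / (2.0 * math.log(decay))
    return weight * math.exp(-(adjusted ** 2) / (2.0 * sigma2))

print(gauss_boost(0))    # inside the 3-day offset window: full 2x boost
print(gauss_boost(63))   # offset + scale days ago: boost halved
```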

8.4.5. aggregations

In the aggregations block you can define the way you want to aggregate the result set.

Since it’s an array you can define multiple aggregations for a single search.

You have three different ways to aggregate:

  • Date histogram aggregation (for date fields)

  • Terms aggregation (for string fields)

  • Stats aggregation (for numeric fields)

Every aggregation can include another aggregation, to perform multi-level aggregations. For example:

{
    "aggregations": [{
        "type": "date_histogram",
        "name": "myagg1",
        "fieldPath": "my.param1",
        "aggregations": [
            // other sub-aggregations
        ]
    }, {
        // other aggregations
    }]
}
Date histogram aggregation

To define a date histogram aggregation you have to define the following structure:

{
    "type": "date_histogram",
    "name": "myagg1",
    "fieldPath": "my.param1",
    "interval": "1y",
    "minDate": "2018-11-01T00:00:00.000Z",
    "maxDate": "2019-01-18T11:29:25.292Z",
    "minDocCount": 10,
    "missingValues": "N/A"
}

Where:

Field Description

type

Must be date_histogram.

name

The name of the aggregation.

interval

The interval granularity; must be one of 1m, m, 1h, h, 1d, d, 1w, w, 1M, M, 1q, q, 1y, y.

minDate

The minimum date for the aggregation.

maxDate

The maximum date for the aggregation.

minDocCount

The minimum number of documents a bucket should contain.

missingValues

The value to be used in case a document has no value for the specified field.

Terms aggregation

To define a terms aggregation you have to define the following structure:

{
    "type": "terms",
    "name": "myagg1",
    "fieldPath": "my.param1",
    "size": 10
}

Where:

Field Description

type

Must be terms.

name

The name of the aggregation.

size

The number of terms to consider.

Stats aggregation

To define a stats aggregation you have to define the following structure:

{
    "type": "stats",
    "name": "myagg1",
    "fieldPath": "my.param1"
}

Where:

Field Description

type

Must be stats.

name

The name of the aggregation.

8.5. Scroll a huge result set with a cursor and a DSL statement

There is a limitation on paginating a search result, due to an Elastic limitation.

Suppose your potential result set has 1,000,000 elements. Using standard pagination, you cannot navigate across the entire data set.

As a general consideration, the sum of your offset plus the page size should be less than 10,000.

If this is not the case, the first question to ask yourself is whether what you are doing is correct. Hardly any user will need to navigate to page 984 (as an example). You should probably give them the possibility to refine their search, perhaps using aggregations.
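A minimal sketch of that guard (the 10,000 window matches Elasticsearch's default index.max_result_window; the helper is illustrative):

```python
MAX_WINDOW = 10_000  # Elasticsearch's default index.max_result_window

def page_is_reachable(offset: int, limit: int) -> bool:
    """Standard pagination only works while offset + limit stays within
    the Elasticsearch result window; past that, switch to a scroll cursor."""
    return offset + limit <= MAX_WINDOW

print(page_is_reachable(9_990, 10))   # last reachable page
print(page_is_reachable(10_000, 10))  # beyond the window: use a cursor
```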

However, there may be cases where you need to scroll through a sufficiently large result set, for example for data migration to third parties.

To support this activity we provide a new search endpoint, very similar to the DSL-based one, with the only difference that, in addition to the result set, it also returns a cursor, which can be used in subsequent calls to scroll through a large result set.

The API to create a cursor and retrieve the first chunk of results is /api/search/createScrollCursor; the method is POST, and all the parameters and body are the same as a normal DSL search.

The only differences are that it requires being logged in with an administrative user (this API is not freely accessible) and that you can specify the validity duration of the cursor by means of a query parameter (durationSeconds). If this value is not defined, the default duration is 100 seconds. You have to use the cursor at least once within the duration period, otherwise it will expire.

In the response you'll find the cursor in the scrollCursorId field.

Once you have created the cursor, you can use another API to scroll the result set. The API endpoint is /api/search/feetchScrollCursor and the method is GET. The parameters for this call are:

Param name Type Default Value Description

cursorId

QS

-

The scroll cursor id you get from the create scroll cursor API.

aggregators

QS

-

The list of all the aggregators to apply to the result set.

durationSeconds

QS

100

Like in the creation, you have to perform a new call before the duration period ends, otherwise the cursor will expire.
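The create-then-fetch loop can be sketched as follows. The two callables stand in for the createScrollCursor and fetch HTTP calls (they are stubs, not a real HTTP client), and the "result" field name in the create response is an assumption for illustration; only scrollCursorId is documented above:

```python
def scroll_all(create_cursor, fetch_page):
    """Drain a scroll cursor: create it, then keep fetching until a page
    comes back empty (remembering each fetch must happen within the
    cursor's duration window)."""
    first = create_cursor()
    cursor_id = first["scrollCursorId"]
    results = list(first["result"])      # field name assumed for the sketch
    while True:
        page = fetch_page(cursor_id)
        if not page:                     # empty page: cursor is exhausted
            break
        results.extend(page)
    return results

# Stubbed transport simulating two further pages, then exhaustion:
pages = [["a", "b"], ["c", "d"], []]
create = lambda: {"scrollCursorId": "cur-1", "result": ["x", "y"]}
fetch = lambda cursor_id: pages.pop(0)

out = scroll_all(create, fetch)
print(out)  # ['x', 'y', 'a', 'b', 'c', 'd']
```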