= openKonsequenz - How to configure the modules
:Date: 2020-03-24
:Revision: 1
:icons:
:source-highlighter: highlightjs
:highlightjs-theme: solarized_dark
:imagesdir: ../img
:iconsdir: ../img/icons
:lang: en
:encoding: utf-8
:sectanchors:
:numbered:
:xrefstyle: full
<<<
IMPORTANT: Please make sure that *Portal (Auth n Auth)* and *ContactBaseData* are installed and configured first!
== Prerequisites
* *howToBuild* documentation read and steps completed
* *howToRun* documentation read
* *application.yml* copied from [microservice]/src/main/resources/application.yml into the deployment folder, next to the *.jar,
of each microservice you want to use
CAUTION: Each [microservice] has its *own application.yml*
== Configuration of Keycloak
The following roles need to be created under the "Roles" section in the Keycloak Administration Console (Web-UI)
for your Realm:
* *grid-failure-access*
Grants access to this module, always needed
* *grid-failure-admin*
"Admin"
* *grid-failure-creator*
"Erfasser" (creator)
* *grid-failure-qualifier*
"Qualifizierer" (qualifier)
* *grid-failure-publisher*
"Veröffentlicher" (publisher)
* *grid-failure-reader*
Read only rights
Assign the roles to your users accordingly. For further instructions on how to create or assign roles,
see the Keycloak manual or the *Portal (Auth n Auth)* Architecture Documentation (Section: Configuration of Keycloak).
Also create a technical user with admin rights (later on referred to as *technical-username* in this document).
TIP: To create users and assign roles to them you can also use the Admin CLI of Keycloak via script.
An example for Windows or Linux can be found in *gfsBackendService/deploy/addKeycloakUsersGFI.sh*.
The script has to be executed inside *keycloak-install-folder/bin*.
== Configuration of the system
The backend services are configured in *application.yml* files. If there is an *application.yml* file next to the
**.jar* file of the microservice, this *application.yml* takes precedence over the default *application.yml* that resides
inside the jar file. In other words, if there is no *application.yml* next to the jar file, the default one inside is used.
For convenience we use an external config file next to the jar.
After editing an *application.yml* file, you have to restart the microservice for changes to take effect.
Yml files can be divided into different configuration profiles.
When starting the backend service you can specify
the active profile; if no profile is set, the default one is used.
The default profile is the one you most likely want to configure and use.
If you want different or additional profiles, see the following link for further details; a minimal sketch follows below:
https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-profiles
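
Below is a minimal sketch of such a profile split inside an *application.yml*; the property values are placeholders and not taken from the shipped configuration. With Spring Boot 2.x the profile-specific document is selected via `spring.profiles` (newer Spring Boot versions use `spring.config.activate.on-profile` instead), and the active profile can be chosen at startup, for example via the environment variable `SPRING_PROFILES_ACTIVE`.

[source,yaml]
----
# Default document: used when no profile is activated at startup
server:
  port: 9165            # placeholder value

---
# This document only applies when the "dev" profile is active
# (Spring Boot 2.x syntax; newer versions use spring.config.activate.on-profile)
spring:
  profiles: dev
server:
  port: 9265            # hypothetical alternative port for the dev profile
----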
CAUTION: _Credentials (Username and Password)_ +
All credentials in these yml files are hidden. Environment variables are used to set
them. An environment variable is referenced in a yml file like this: `${ENVIRONMENT_VARIABLE}`.
To successfully run the backend service, either set the environment variables on your platform
or replace the placeholders in the yml file.
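
A minimal sketch of how such a placeholder looks in an *application.yml*, using the *spring.datasource* section described further below; the environment variable names `DB_USERNAME` and `DB_PASSWORD` are only examples and not prescribed by the shipped configuration:

[source,yaml]
----
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/gridfailureinformation  # hypothetical JDBC URL
    username: ${DB_USERNAME}   # resolved from the environment variable DB_USERNAME
    password: ${DB_PASSWORD}   # resolved from the environment variable DB_PASSWORD
----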
=== Periodically executed jobs - Cron expression
To execute periodically occurring jobs, Spring uses a form of cron expression.
These do not follow the same format as UNIX cron expressions.
A Spring cron expression consists of six sequential fields:
----
second, minute, hour, day of month, month, day(s) of week
----
.Explanation
[options="header,footer"]
|=======================
|Syntax |Meaning |Example |Explanation
|* |match any |"* * * * * *" | do always
|*/x |every x |"*/5 * * * * *"| do every five seconds
|? |no specification |"0 0 0 24 12 ?"| do every Christmas Day
|=======================
.Examples
[options="header,footer"]
|=======================
|Syntax |Meaning
|"0 0 * * * *" |the top of every hour of every day.
|"0 */1 * ? * *" |every minute starting at second 0.
|"0 */5 * ? * *" |every 5 minutes starting at second 0.
|"*/10 * * * * *" |every ten seconds.
|"0 0 8-10 * * *" |8, 9 and 10 o'clock of every day.
|"0 0/30 8-10 * * *" |8:00, 8:30, 9:00, 9:30 and 10 o'clock every day.
|"0 0 9-17 * * MON-FRI" |on the hour nine-to-five weekdays
|"0 0 0 24 12 ?" |every Christmas Day at midnight
|=======================
You can use the following generator, but keep in mind *not to use the year*.
The Spring expression has to consist of six fields only:
https://www.freeformatter.com/cron-expression-generator-quartz.html
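
As a sketch, this is how such a cron expression could appear in an *application.yml*, using the *export-to-dmz* parameters described later in this document (the exact nesting in the shipped file may differ):

[source,yaml]
----
export-to-dmz:
  enabled: true            # example value
  cron: "0 */10 * ? * *"   # every 10 minutes, starting at second 0
----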
[#secureEndpoints]
=== Secure Endpoints
The main backend (gfsBackendService) is secured by JWTs (JSON-Web-Tokens).
The following services are secured by "Basic Authentication":
* addressImport
* mailExport
* stoerungsauskunftInterface
* SARISInterface
For each of them you will find the following parameters to configure the "Basic Authentication" in the
corresponding *application.yml*:
* *security.endpoint.user* Username for the secure endpoint
* *security.endpoint.password* Password for the secure endpoint
SAMOInterface has no public endpoints and therefore doesn't need to be secured.
[NOTE]
Besides the initial manual import of all addresses, these endpoints (in combination with Swagger) are primarily a convenient way to
test the functionality of the application.
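
A minimal sketch of these two properties in the *application.yml* of one of the services listed above; the environment variable names are only examples:

[source,yaml]
----
security:
  endpoint:
    user: ${ENDPOINT_USER}         # username for Basic Authentication (example env variable name)
    password: ${ENDPOINT_PASSWORD} # password for Basic Authentication (example env variable name)
----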
=== Configuration of each service
The following services have to be configured; their parameters are explained in the next sections:
* <<configuration-gfsBackendService>>
* <<configuration-addressImport>>
* <<configuration-mailExport>>
* Interfaces
** <<configuration-stoerungsauskunftInterface>>
** <<configuration-SAMOInterface>>
** <<configuration-SARISInterface>>
Some parameters explained in <<configuration-gfsBackendService>> below are used in other services
as well and won't be explained again (for example <<configuration-section-rabbit_mq>>).
[#configuration-gfsBackendService]
==== Configuration of gfsBackendService (Main Backend)
* *spring.datasource* configuration section for the database connection
* *flyway.enabled* If set to "true", the database migrations
will be performed automatically when starting the application
(this parameter should normally be set to "false")
* *server.port* Port on which this microservice is deployed on (DEPLOYMENT_PORT). (E.g. for gfsBackendService: 9165)
* *server.max-http-header-size* Maximum size for the http-headers
* *jwt.tokenHeader* Name of the http-header which carries the authentication-token.
(should be "Authorization")
* *jwt.useStaticJwt* If set to "true" then the backend will use *jwt.staticJwt*
as Authorization-token. (This won't work for calls to other modules
like the Auth'n'Auth module, because the token will be out of date)
* *process.definitions.classification.plannedMeasureDbid* Database id of "geplante Maßnahme" (planned measure) in table ref_failure_classification.
This is needed for the process grid.
* *reminder.status-change.minutes-before* Send reminder mail to publisher distribution group x minutes before planned end date
of the gridfailureinformation. (value in minutes, Example: 1440)
* *export-to-dmz.enabled* Switch to periodically export published gridfailureinformations to the external "Table/Map-Web-Component"
(true or false)
* *export-to-dmz.cron* Cron-expression for the export job (above). (Example: 0 */10 * ? * *)
* *swagger.enabled* Switch to enable/disable the Swagger endpoints (true/false)
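
As an orientation, here is a sketch of how some of these parameters could look in the *application.yml* of the gfsBackendService; the values are taken from the example values above and the exact nesting in the shipped file may differ:

[source,yaml]
----
server:
  port: 9165                 # DEPLOYMENT_PORT of the gfsBackendService
flyway:
  enabled: false             # do not run database migrations automatically
jwt:
  tokenHeader: Authorization # http header that carries the authentication token
  useStaticJwt: false        # use real tokens instead of a static one
export-to-dmz:
  enabled: true
  cron: "0 */10 * ? * *"     # export published gridfailureinformations every 10 minutes
swagger:
  enabled: true              # enable the Swagger endpoints
----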
[#configuration-section-rabbit_mq, reftext="_RabbitMQ configuration_"]
_RabbitMQ configuration_
* *rabbitmq.host* RabbitMQ-Server (for example "localhost")
* *rabbitmq.port* Port of the RabbitMQ-Server (for example "5672")
* *rabbitmq.username* Username for the technical RabbitMQ user
* *rabbitmq.password* Password for the technical RabbitMQ user
* *rabbitmq.routingkey* Routing key for the import queue
* *rabbitmq.exchangename*: Exchange name for the import queue
* *rabbitmq.importExchange*: Exchange name for the import queue
* *rabbitmq.importQueue* Queuename for the import queue (will be created by the backend)
* *rabbitmq.importkey* Routing key for the import queue
* *rabbitmq.exportExchange*: Exchange name for the export queue
* *rabbitmq.exportQueue*: Queuename for the export queue (will be created by the backend)
* *rabbitmq.exportKey*: Routing key for the export queue
* *rabbitmq.isMailType*: Should this channel be treated as a mail channel? +
If true messages of this channel will be sent via email.
*exportExchange* in the main backend is further divided into *channels*; the functionality stays the same as described above.
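
A sketch of the RabbitMQ section with placeholder values; the credentials are again expected via environment variables (example names), and the exchange, queue and routing-key names are purely illustrative:

[source,yaml]
----
rabbitmq:
  host: localhost
  port: 5672
  username: ${RABBITMQ_USER}            # technical RabbitMQ user (example env variable name)
  password: ${RABBITMQ_PASSWORD}
  importExchange: sit.import.exchange   # placeholder exchange name
  importQueue: sit.import.queue         # will be created by the backend
  importkey: sit.import.key             # routing key for the import queue
  exportExchange: sit.export.exchange   # placeholder exchange name
  exportQueue: sit.export.queue         # will be created by the backend
  exportKey: sit.export.key             # routing key for the export queue
----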
[#configuration-section-settings]
_UI settings_
* *overviewMapInitialZoom*: Initial zoom-factor for the display of the overview map (default is 10)
* *detailMapInitialZoom*: Initial zoom-factor for the display of the map in the detail view (for example 10)
* *overviewMapInitialLatitude*: Initial latitude for the overview map (for example 49.656634)
* *overviewMapInitialLongitude*: Initial longitude for the overview map (for example 8.423207)
* *daysInPastToShowClosedInfos*: Number of days in the past after which a closed failure information is no longer shown (for example 365)
* *dataExternInitialVisibility*: *show* or *hide* data in the external map or table unless a postcode has been entered
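
A sketch of the UI settings with the example values given above (whether these keys are grouped under a common prefix in the shipped file is not shown here):

[source,yaml]
----
overviewMapInitialZoom: 10
detailMapInitialZoom: 10
overviewMapInitialLatitude: 49.656634
overviewMapInitialLongitude: 8.423207
daysInPastToShowClosedInfos: 365
dataExternInitialVisibility: hide   # hide external data until a postcode has been entered
----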
[#configuration-section-mailtemplates]
_Mail settings (Default templates)_
* *isUseHtmlEmailBtnTemplate*: If true, a button with a direct link to the gridfailureinformation is shown in emails instead of a plain link. Keep in mind that *isHtmlEmail* and *isUseHtmlEmailTemplate* have to be set to true as well in <<configuration-mailExport>>
* *emailSubjectPublishInit*: Template for the subject of the publishing e-mail
* *emailContentPublishInit*: Template for the body of the publishing e-mail
* *emailSubjectUpdateInit*: Template for the subject of the update e-mail
* *emailContentUpdateInit*: Template for the body of the update e-mail
* *emailSubjectCompleteInit*: Template for the subject of the completed e-mail
* *emailContentCompleteInit*: Template for the body of the completed e-mail
* *distribution-group-publisher.name*: Name of the distribution group for the publisher (Example: "Veröffentlicher")
* *distribution-group-publisher.distribution-text*: Template for the body of the e-mail
which is sent to the publisher distribution group
[#configuration-section-visibility-of-fields]
_Field visibility settings_
Please configure the visibility of fields in this section using "show" or "hide".
* *visibilityConfiguration.fieldVisibility*: Here you can set the visibility of fields in the detail mask of the Failure
Information
* *visibilityConfiguration.tableInternColumnVisibility*: Here you can set the visibility of columns in the
internal table
* *visibilityConfiguration.tableExternColumnVisibility*: Use "show" or "hide" to toggle the visibility of columns
in the external table
* *visibilityConfiguration.mapExternTooltipVisibility*: Here you can define which column shall be shown or hidden
in the tooltips of the external map
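
A sketch of the visibility configuration; the field and column names used here (for example `postcode`) are purely hypothetical placeholders, the real names are the ones already present in the shipped *application.yml*:

[source,yaml]
----
visibilityConfiguration:
  fieldVisibility:
    postcode: show      # hypothetical field name in the detail mask
    internalRemark: hide
  tableInternColumnVisibility:
    postcode: show      # hypothetical column name in the internal table
  tableExternColumnVisibility:
    postcode: show      # hypothetical column name in the external table
  mapExternTooltipVisibility:
    postcode: hide      # hypothetical column name in the external map tooltips
----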
[#configuration-section-services]
_Services configuration_
* *services.authNAuth.name* Name for the Auth'n'Auth-Service (for example "authNAuthService")
* *services.authNAuth.technical-username* Technical user for the Auth'n'Auth-Service
* *services.authNAuth.technical-userpassword* Password of the technical user for the Auth'n'Auth-Service
* *services.contacts.name* Name for ContactBaseData-Service (for example "contactService")
* *services.contacts.communicationType.mobile* Designation of mobile phone number as communication type in
ContactBaseData-Service (for example "Mobil")
* *services.contacts.useModuleNameForFilter* Boolean Parameter to decide whether to filter contacts in
contact service by module name in ContactBaseData-Service ("true" / "false")
* *services.contacts.moduleName* Name of the module used to filter data from the ContactBaseData-Service
(for example "Störungsinformationstool")
* *services.sitCache.name* Name for SitCache-Service (for example "sitCacheService")
* *portalFeLoginURL* Login Url of your Portal (Auth n Auth) module.
This is needed for the creation of the direct email link to a gridfailure information message.
* *portalFeModulename* Name of this module as it is displayed in the Portal (Auth n Auth) module
below the corresponding image (example "Störungsinformationstool").
This is needed for the creation of the direct email link to a gridfailure information message.
* *authNAuthService.ribbon.listOfServers* Here one can configure the base url to the Auth'n'Auth-Service
* *contactService.ribbon.listOfServers* Here one can configure the base url to the ContactBaseData-Service
* *sitCacheService.ribbon.listOfServers* Here one can configure the base url to the SitCache-Service
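
A sketch of the services section together with the ribbon server lists; host names, ports, URLs and the environment variable names are placeholders:

[source,yaml]
----
services:
  authNAuth:
    name: authNAuthService
    technical-username: ${TECHNICAL_USERNAME}         # technical Keycloak user (example env variable name)
    technical-userpassword: ${TECHNICAL_USERPASSWORD}
  contacts:
    name: contactService
    communicationType:
      mobile: Mobil
    useModuleNameForFilter: true
    moduleName: "Störungsinformationstool"
  sitCache:
    name: sitCacheService
portalFeLoginURL: https://portal.example.org/login    # placeholder login URL of the Portal (Auth n Auth) module
portalFeModulename: "Störungsinformationstool"
authNAuthService:
  ribbon:
    listOfServers: http://localhost:8080              # placeholder base url of the Auth'n'Auth-Service
contactService:
  ribbon:
    listOfServers: http://localhost:9155              # placeholder base url of the ContactBaseData-Service
sitCacheService:
  ribbon:
    listOfServers: http://localhost:9160              # placeholder base url of the SitCache-Service
----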
[#configuration-addressImport]
==== Configuration of addressImport
* *utm.zoneNumber* UTM zone number needed to calculate latitude/longitude coordinates (for example 32)
* *utm.zoneLetter* UTM zone letter needed to calculate latitude/longitude coordinates (for example U)
https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system
* *adressimport.cleanup* Clean up the address database before each import? (true/false)
* *adressimport.cron* Cron-expression for the address import job
* *adressimport.file(s)* Path of each CSV-file to import
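
A sketch of the addressImport parameters with the example values mentioned above; the CSV path is a placeholder and the exact key for the file list may differ in the shipped yml:

[source,yaml]
----
utm:
  zoneNumber: 32
  zoneLetter: U
adressimport:
  cleanup: true                     # empty the address database before each import
  cron: "0 0 2 * * *"               # hypothetical schedule: every night at 02:00
  files: /opt/openk/addresses.csv   # placeholder path to the CSV file(s) to import
----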
''''
.How to start import of addresses manually
To start the import of addresses manually you can use the endpoint of the addressImporter;
alternatively, you can wait until the cron expression starts the job automatically.
. Navigate with your browser to:
----
[HOSTNAME_OF_DEPLOYMENT]:[DEPLOYMENT_PORT_OF_ADDRESSIMPORTER]/swagger-ui.html#/
----
[start=2]
. Enter the credentials you have set for the endpoints of the addressImporter. (See <<secureEndpoints>>)
. Press "adress-import-controller". You should see now the following screen:
image::swagger-ui-addressimporter.png[]
[start=4]
. Press somewhere in the "POST" row and then press "Try it out"
image::swagger-ui-addressimporter-try-out.png[]
[start=5]
. Now press the big blue "Execute" button
. The import has started successfully if you receive status code 200
[#configuration-mailExport]
==== Configuration of mailExport
* *email.sender* Email address of the sender for the automatically sent emails.
* *email.smtpHost* SMTP-Host of your email provider
* *email.port* Port of your email provider
* *email.isHtmlEmail* If true, emails are sent as HTML emails, meaning you can use HTML in your templates
* *email.isUseHtmlEmailTemplate* If true, a responsive openKonsequenz template is used as the content frame for all emails.
.Openkonsequenz email template
image::mailHog-openK-template.png[]
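
A sketch of the mailExport parameters; sender address, host and port are placeholders for your own mail provider:

[source,yaml]
----
email:
  sender: noreply@example.org   # placeholder sender address
  smtpHost: smtp.example.org    # placeholder SMTP host
  port: 587                     # placeholder SMTP port
  isHtmlEmail: true             # send emails as HTML
  isUseHtmlEmailTemplate: true  # wrap all emails in the responsive openKonsequenz template
----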
[#configuration-interface-general]
==== Configuration of interfaces in general
The following flags can be used for all interfaces:
* *gridFailureInformation.autopublish* If set to true all imported messages will be published automatically to the
table-map-web-components (true/false)
* *gridFailureInformation.onceOnlyImport* If set to true messages will only be imported once. If the same message (Id given from the interface) is
pulled again it won't be imported. (true/false)
* *gridFailureInformation.excludeEquals* If set to true messages will only be imported again when they have changed.
If the same message (Id given from the interface) has the same content as the already imported one it won't be imported again. (true/false)
* *gridFailureInformation.excludeAlreadyEdited* If set to true messages won't be imported again if someone has already changed them in the application
(SIT). (true/false)
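
A sketch of these flags as they could appear in the *application.yml* of an interface service (the values shown are only examples):

[source,yaml]
----
gridFailureInformation:
  autopublish: false           # do not publish imported messages automatically
  onceOnlyImport: true         # import each message (by its interface id) only once
  excludeEquals: true          # skip re-import when the content is unchanged
  excludeAlreadyEdited: true   # skip re-import when the message was already edited in SIT
----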
[#configuration-stoerungsauskunftInterface]
==== Configuration of stoerungsauskunft-Interface
* *stoerungsauskunft.apiUrl* URL to the endpoint of "stoerungsauskunft.de"
(Development: https://stage-api-operator.stoerungsauskunft.de/api/v1.0/ )
Change to the production environment accordingly.
* *stoerungsauskunft.user* Username for stoerungsauskunft.de
* *stoerungsauskunft.password* Password for stoerungsauskunft.de
* *stoerungsauskunft.scheduling-import.enabled* Switch to enable/disable automatic import from stoerungsauskunft.de (true/false)
* *stoerungsauskunft.scheduling-import.cron* Cron-expression for automatic import from stoerungsauskunft.de (Example: 0 */20 * ? * *)
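
A sketch of the stoerungsauskunft section; the credentials are again expected via environment variables (example names):

[source,yaml]
----
stoerungsauskunft:
  apiUrl: https://stage-api-operator.stoerungsauskunft.de/api/v1.0/   # development endpoint, see above
  user: ${STOERUNGSAUSKUNFT_USER}
  password: ${STOERUNGSAUSKUNFT_PASSWORD}
  scheduling-import:
    enabled: true
    cron: "0 */20 * ? * *"   # import every 20 minutes
----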
[#configuration-SAMOInterface]
==== Configuration of SAMO-Interface
* *sftp.enable-polling* Switch to enable/disable automatic import from SAMO (true/false)
* *sftp.host* IP-address of the host
You can decide whether you want to access SFTP via username/password or with an SSH private/public key. If you set a path to a private key,
this method takes precedence over the username/password method and is used instead. Keep in mind *not* to set a passphrase, since
confirmation of the passphrase is not possible in this automated process.
* *sftp.privateKey* Private key in the form "file:_PATH_TO_FILE_" without quotes (Example: file:C:\sshKeys\privateOpenKQServer_key-SAMO-Interface)
* *sftp.privateKeyPassphrase* Leave it empty
* *sftp.user* Username for the sftp access
* *sftp.password* Password for the sftp access
* *sftp.deleteRemoteFile* Switch to delete the remote file after a successful import. Leave it set to true (true/false)
* *sftp.directory* Path of the remote directory on the server where the *.json file to import is located
* *sftp.fileFilter* Which file formats to import; leave it at JSON (*.json)
* *sftp.cron* Cron-expression for automatic import from SAMO (Example: 0 */10 * ? * *)
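
A sketch of the SFTP settings for the key-based variant; host, key path and remote directory are placeholders:

[source,yaml]
----
sftp:
  enable-polling: true
  host: 192.0.2.10                            # placeholder IP address of the SFTP server
  user: ${SFTP_USER}                          # example env variable name
  password: ${SFTP_PASSWORD}                  # ignored as soon as a private key is configured
  privateKey: file:/opt/openk/keys/samo_key   # placeholder path; key-based access takes precedence
  privateKeyPassphrase:                       # leave empty
  deleteRemoteFile: true
  directory: /export/samo                     # placeholder remote directory containing the *.json files
  fileFilter: "*.json"
  cron: "0 */10 * ? * *"                      # poll every 10 minutes
----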
[#configuration-SARISInterface]
==== Configuration of SARIS-Interface
* *saris.apiUrl* URL to the endpoint of "SARIS"
* *saris.user* Username for SARIS
* *saris.bisToleranz* Messages from SARIS will be imported from today-bisToleranz. (Value in minutes: 1440 = one day)
* *saris.vonToleranz* Messages from SARIS will be imported from today+vonToleranz. (Value in minutes: 1440 = one day)
* *saris.scheduling-import.enabled* Switch to enable/disable automatic import from SARIS (true/false)
* *saris.scheduling-import.cron* Cron-expression for automatic import from SARIS (Example: 0 */20 * ? * *)
* *saris.testIntegration* This is for the integration test of SARIS. You can set a test date (day, month, year) and call the
REST endpoint (saris/response-test). For security reasons you only see the response in the logs.
A successful response can be obtained, for example, with (11, 2, 2020).
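
A sketch of the SARIS section; the API URL is a placeholder and the username is again expected via an environment variable (example name):

[source,yaml]
----
saris:
  apiUrl: https://saris.example.org/api/   # placeholder endpoint URL
  user: ${SARIS_USER}
  bisToleranz: 1440          # one day in minutes
  vonToleranz: 1440          # one day in minutes
  scheduling-import:
    enabled: true
    cron: "0 */20 * ? * *"   # import every 20 minutes
----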