Dependencies
Before setting up the plugin, ensure you have the following dependencies:
- A MongoDB database for storing notification templates and records
- Access to the Kafka instance used by the FlowX.AI Engine
- A Redis instance for caching notification templates
- An S3-compatible file storage solution (for example MinIO) if you need to attach documents to notifications
Authorization configuration
Changed in v5.5.0: the default SECURITY_TYPE is now jwt-public-key (previously oauth2), and opaque-token introspection has been removed.
Set these variables to connect to your identity management platform:
| Environment Variable | Description | Default Value |
|---|---|---|
| SECURITY_TYPE | Security type | jwt-public-key |
| SECURITY_OAUTH2_BASESERVERURL | Base URL of the OAuth2/OIDC server | |
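For reference, a minimal environment block might look like the sketch below; the identity provider URL is a placeholder, not a value shipped with the plugin.

```yaml
# Minimal sketch of the authorization settings.
# The base server URL is a placeholder; point it at your own OAuth2/OIDC provider.
SECURITY_TYPE: "jwt-public-key"
SECURITY_OAUTH2_BASESERVERURL: "https://idp.example.com/auth/realms/flowx"
```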
MongoDB configuration
Only the database access details need to be configured; the plugin handles the rest.
| Environment Variable | Description | Default Value |
|---|---|---|
| SPRING_DATA_MONGODB_URI | MongoDB connection URI | mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-0.mongodb-headless,mongodb-1.mongodb-headless,mongodb-arbiter-0.mongodb-arbiter-headless:27017/notification-plugin |
| DB_USERNAME | Username for runtime MongoDB connection | notification-plugin |
| DB_PASSWORD | Password for runtime MongoDB connection | password |
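As an illustration, the snippet below sketches a single-host connection; the shipped default instead targets a three-member replica set, and the hostname and credentials here are placeholders.

```yaml
# Illustrative MongoDB settings for a single-host setup (placeholder values).
# The shipped default URI targets a replica set; adjust to your topology.
SPRING_DATA_MONGODB_URI: "mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-0.mongodb-headless:27017/notification-plugin"
DB_USERNAME: "notification-plugin"
DB_PASSWORD: "change-me"
```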
Redis configuration
The Notification Plugin uses Redis for caching. Configure the Redis connection using the standard Redis environment variables.
Quick reference:
| Environment Variable | Description | Example Value |
|---|---|---|
| SPRING_DATA_REDIS_HOST | Redis server hostname | localhost |
| SPRING_DATA_REDIS_PORT | Redis server port | 6379 |
| SPRING_DATA_REDIS_PASSWORD | Redis authentication password | yourpassword |
| REDIS_TTL | Cache TTL in milliseconds | 5000000 |
For complete Redis configuration including Sentinel mode, Cluster mode, and SSL/TLS setup, see the Redis Configuration guide.
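A minimal sketch for a standalone Redis instance, with placeholder host and password:

```yaml
# Example connection to a standalone Redis instance (placeholder values).
SPRING_DATA_REDIS_HOST: "redis-master"
SPRING_DATA_REDIS_PORT: "6379"
SPRING_DATA_REDIS_PASSWORD: "change-me"
REDIS_TTL: "5000000"   # cache entry time to live, in milliseconds
```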
Kafka configuration
Core Kafka settings
| Environment Variable | Description | Default Value |
|---|---|---|
| SPRING_KAFKA_BOOTSTRAPSERVERS | Address of the Kafka server(s) | localhost:9092 |
| SPRING_KAFKA_SECURITY_PROTOCOL | Security protocol for Kafka connections | PLAINTEXT |
| SPRING_KAFKA_CONSUMER_GROUPID | Consumer group identifier | notification-plugin-consumer |
| KAFKA_MESSAGE_MAX_BYTES | Maximum message size (bytes) | 52428800 (50 MB) |
| KAFKA_AUTHEXCEPTIONRETRYINTERVAL | Retry interval after authorization exceptions (seconds) | 10 |
| KAFKA_CONSUMER_THREADS | Number of consumer threads | 1 |
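For example, a plaintext connection to a two-broker cluster might be configured as below; the broker addresses are placeholders.

```yaml
# Example core Kafka settings for a plaintext connection (placeholder broker addresses).
SPRING_KAFKA_BOOTSTRAPSERVERS: "kafka-0.kafka-headless:9092,kafka-1.kafka-headless:9092"
SPRING_KAFKA_SECURITY_PROTOCOL: "PLAINTEXT"
SPRING_KAFKA_CONSUMER_GROUPID: "notification-plugin-consumer"
KAFKA_MESSAGE_MAX_BYTES: "52428800"   # 50 MB
KAFKA_CONSUMER_THREADS: "1"
```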
Consumer error handling
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_CONSUMER_ERRORHANDLING_ENABLED | Enable consumer error handling | false |
| KAFKA_CONSUMER_ERRORHANDLING_RETRIES | Number of retry attempts for failed messages | 0 |
| KAFKA_CONSUMER_ERRORHANDLING_RETRYINTERVAL | Interval between retries (milliseconds) | 1000 |
OAuth authentication (when using SASL_PLAINTEXT)
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_OAUTH_CLIENT_ID | OAuth client ID | kafka |
| KAFKA_OAUTH_CLIENT_SECRET | OAuth client secret | kafka-secret |
| KAFKA_OAUTH_TOKEN_ENDPOINT_URI | OAuth token endpoint | kafka.auth.localhost |
Topic naming configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_TOPIC_NAMING_PACKAGE | Package prefix for topic names | ai.flowx. |
| KAFKA_TOPIC_NAMING_ENVIRONMENT | Environment segment for topic names | |
| KAFKA_TOPIC_NAMING_VERSION | Version suffix for topic names | .v1 |
| KAFKA_TOPIC_NAMING_SEPARATOR | Primary separator for topic names | . |
| KAFKA_TOPIC_NAMING_SEPARATOR2 | Secondary separator for topic names | - |
| KAFKA_TOPIC_NAMING_ENGINERECEIVEPATTERN | Engine receive pattern | engine.receive. |
Topic configurations
Each action in the service corresponds to a Kafka event on a specific topic. Configure the following topics:
OTP topics
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_TOPIC_OTP_GENERATE_IN | Topic for incoming OTP generation requests | ai.flowx.plugin.notification.trigger.generate.otp.v1 |
| KAFKA_TOPIC_OTP_GENERATE_OUT | Topic for OTP generation results | ai.flowx.engine.receive.plugin.notification.generate.otp.results.v1 |
| KAFKA_TOPIC_OTP_VALIDATE_IN | Topic for incoming OTP validation requests | ai.flowx.plugin.notification.trigger.validate.otp.v1 |
| KAFKA_TOPIC_OTP_VALIDATE_OUT | Topic for OTP validation results | ai.flowx.engine.receive.plugin.notification.validate.otp.results.v1 |
Notification topics
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN | Topic for incoming notification requests | ai.flowx.plugin.notification.trigger.send.notification.v1 |
| KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT | Topic for notification delivery confirmations | ai.flowx.engine.receive.plugin.notification.confirm.send.notification.v1 |
| KAFKA_TOPIC_NOTIFICATION_EXTERNAL_OUT | Topic for forwarding notifications to external systems | ai.flowx.plugin.notification.trigger.forward.notification.v1 |
Audit topic
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_TOPIC_AUDIT_OUT | Topic for sending audit logs | ai.flowx.core.trigger.save.audit.v1 |
Resource usages topics
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_TOPIC_RESOURCESUSAGES_REFRESH | Topic for resource usages refresh events | ai.flowx.application-version.resources-usages.refresh.v1 |
| KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_RESPONSE | Topic for sub-resource validation responses | ai.flowx.application-version.resources-usages.sub-res-validation.response.v1 |
| KAFKA_TOPIC_APPLICATION_RESOURCE_USAGES_OUT | Topic for bulk resource usage operations | ai.flowx.application-version.resources-usages.operations.bulk.v1 |
File storage configuration
Depending on your use case, you can store files either directly on the file system or in an S3-compatible cloud storage solution (for example, MinIO).
The file storage solution can be configured using the following environment variables:
| Environment Variable | Description | Default Value |
|---|---|---|
| APPLICATION_FILESTORAGE_TYPE | Storage type to use (s3 or fileSystem) | s3 |
| APPLICATION_FILESTORAGE_DISKDIRECTORY | Directory for file storage when using the file system | MS_SVC_NOTIFICATION |
| APPLICATION_FILESTORAGE_S3_ENABLED | Enable S3-compatible storage | true |
| APPLICATION_FILESTORAGE_S3_SERVERURL | URL of the MinIO or S3-compatible server | http://minio-service:9000 |
| APPLICATION_FILESTORAGE_S3_ENCRYPTIONENABLED | Enable server-side encryption | false |
| APPLICATION_FILESTORAGE_S3_ACCESSKEY | Access key for S3 | minio |
| APPLICATION_FILESTORAGE_S3_SECRETKEY | Secret key for S3 | secret |
| APPLICATION_FILESTORAGE_S3_BUCKETPREFIX | Prefix for bucket names | qdevlocal-preview-paperflow |
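A typical MinIO-backed setup might look like the sketch below; the endpoint, credentials, and bucket prefix are placeholders to replace with your own.

```yaml
# Example S3-compatible (MinIO) storage settings; all values are placeholders.
APPLICATION_FILESTORAGE_TYPE: "s3"
APPLICATION_FILESTORAGE_S3_ENABLED: "true"
APPLICATION_FILESTORAGE_S3_SERVERURL: "http://minio-service:9000"
APPLICATION_FILESTORAGE_S3_ACCESSKEY: "notification-plugin-access-key"
APPLICATION_FILESTORAGE_S3_SECRETKEY: "notification-plugin-secret-key"
APPLICATION_FILESTORAGE_S3_BUCKETPREFIX: "mycompany-dev-notifications"
APPLICATION_FILESTORAGE_S3_ENCRYPTIONENABLED: "false"
```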
When using S3-compatible storage for notifications with attachments, the S3 user configured through APPLICATION_FILESTORAGE_S3_ACCESSKEY and APPLICATION_FILESTORAGE_S3_SECRETKEY must have read access to multiple buckets beyond its own.
Required bucket access:
- Own bucket - defined by APPLICATION_FILESTORAGE_S3_BUCKETPREFIX
- Documents Plugin bucket - defined in the Documents Plugin configuration via APPLICATION_FILESTORAGE_S3_BUCKETPREFIX
- CMS Core public bucket - defined in the CMS Core configuration via APPLICATION_FILESTORAGE_S3_BUCKETNAME
- Integration Designer bucket - defined in the Integration Designer configuration via APPLICATION_FILESTORAGE_S3_BUCKETPREFIX
Ensure the S3 user has read permissions on all relevant buckets to avoid attachment failures.
SMTP setup
Configure SMTP settings for sending email notifications:
| Environment Variable | Description | Default Value |
|---|---|---|
| SIMPLEJAVAMAIL_SMTP_HOST | SMTP server hostname | smtp.gmail.com |
| SIMPLEJAVAMAIL_SMTP_PORT | SMTP server port | 587 |
| SIMPLEJAVAMAIL_SMTP_USERNAME | SMTP server username | notification.test@flowx.ai |
| SIMPLEJAVAMAIL_SMTP_PASSWORD | SMTP server password | paswword |
| SIMPLEJAVAMAIL_TRANSPORTSTRATEGY | Email transport strategy (e.g., SMTP, EXTERNAL_FORWARD) | SMTP |
| APPLICATION_MAIL_FROM_EMAIL | Default sender email address | notification.test@flowx.ai |
| APPLICATION_MAIL_FROM_NAME | Default sender name | Notification Test |
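The sketch below shows a typical relay on port 587; the hostname, credentials, and sender identity are placeholders.

```yaml
# Example SMTP settings for a relay on port 587 (placeholder values).
SIMPLEJAVAMAIL_SMTP_HOST: "smtp.example.com"
SIMPLEJAVAMAIL_SMTP_PORT: "587"
SIMPLEJAVAMAIL_SMTP_USERNAME: "notifications@example.com"
SIMPLEJAVAMAIL_SMTP_PASSWORD: "change-me"
SIMPLEJAVAMAIL_TRANSPORTSTRATEGY: "SMTP"
APPLICATION_MAIL_FROM_EMAIL: "notifications@example.com"
APPLICATION_MAIL_FROM_NAME: "Example Notifications"
```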
Email attachments configuration
Configure handling of email attachments:
| Environment Variable | Description | Default Value |
|---|---|---|
| SPRING_HTTP_MULTIPART_MAXFILESIZE | Maximum file size for attachments | 15MB |
| SPRING_HTTP_MULTIPART_MAXREQUESTSIZE | Maximum request size for multipart uploads | 15MB |
OTP configuration
Configure One-Time Password generation and validation:
| Environment Variable | Description | Default Value |
|---|---|---|
| FLOWX_OTP_LENGTH | Number of characters in generated OTPs | 4 |
| FLOWX_OTP_EXPIRETIMEINSECONDS | Expiry time for OTPs (seconds) | 6000 (100 minutes) |
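For instance, a 6-character code that expires after 5 minutes would be configured as follows (illustrative values, not the defaults):

```yaml
# Illustrative OTP settings: 6-character codes valid for 5 minutes.
FLOWX_OTP_LENGTH: "6"
FLOWX_OTP_EXPIRETIMEINSECONDS: "300"
```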
Logging configuration
Control logging levels for different components:
| Environment Variable | Description | Default Value |
|---|---|---|
| LOGGING_LEVEL_ROOT | Root logging level | - |
| LOGGING_LEVEL_APP | Application-specific log level | DEBUG |
| LOGGING_LEVEL_MONGO_DRIVER | MongoDB driver log level | INFO |
| LOGGING_LEVEL_THYMELEAF | Thymeleaf template engine log level | INFO |
| LOGGING_LEVEL_FCM_CLIENT | Firebase Cloud Messaging client log level | OFF |
| LOGGING_LEVEL_REDIS | Redis/Lettuce client log level | OFF |
CAS lib configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| FLOWX_SPICEDB_HOST | SpiceDB server hostname | spicedb |
| FLOWX_SPICEDB_PORT | SpiceDB server port | 50051 |
| FLOWX_SPICEDB_TOKEN | SpiceDB authentication token | spicedb-token |
Usage notes
Topic naming convention
Topics follow a standardized naming convention:
- Example: ai.flowx.plugin.notification.trigger.generate.otp.v1
- Structure: {package}{environment}.{component}.{action}.{subject}.{version}
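With the default naming values from the topic naming table (and an empty environment segment), the OTP generation topic, for example, resolves as sketched below.

```yaml
# How the default naming values compose into the example topic above.
KAFKA_TOPIC_NAMING_PACKAGE: "ai.flowx."
KAFKA_TOPIC_NAMING_ENVIRONMENT: ""    # empty by default; a value such as "dev" would follow the package prefix
KAFKA_TOPIC_NAMING_VERSION: ".v1"
# Resulting topic: ai.flowx.plugin.notification.trigger.generate.otp.v1
```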
Consumer error handling
When KAFKA_CONSUMER_ERRORHANDLING_ENABLED is set to true:
- The application will retry processing failed messages according to KAFKA_CONSUMER_ERRORHANDLING_RETRIES
- Between retries, the application will wait for the duration specified by KAFKA_CONSUMER_ERRORHANDLING_RETRYINTERVAL
For example, if KAFKA_CONSUMER_ERRORHANDLING_RETRYINTERVAL is set to 5000 (5 seconds) and KAFKA_CONSUMER_ERRORHANDLING_RETRIES is set to 5, the consumer makes up to 5 retry attempts, waiting 5 seconds between each attempt.
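Expressed as environment variables, that scenario is simply:

```yaml
# The retry scenario described above.
KAFKA_CONSUMER_ERRORHANDLING_ENABLED: "true"
KAFKA_CONSUMER_ERRORHANDLING_RETRIES: "5"
KAFKA_CONSUMER_ERRORHANDLING_RETRYINTERVAL: "5000"   # milliseconds between retries
```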
Message size configuration
The KAFKA_MESSAGE_MAX_BYTES setting affects multiple Kafka properties:
- spring.kafka.producer.properties.message.max.bytes
- spring.kafka.producer.properties.max.request.size
- spring.kafka.consumer.properties.max.partition.fetch.bytes
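Raising the limit is therefore a single-variable change; the 100 MB value below is purely illustrative.

```yaml
# Illustrative only: raising the maximum message size to 100 MB.
# The value feeds the three Spring Kafka properties listed above.
KAFKA_MESSAGE_MAX_BYTES: "104857600"
```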
OAuth authentication
When using the kafka-auth profile, the security protocol changes to SASL_PLAINTEXT and OAuth configuration via the KAFKA_OAUTH_* variables is required.
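A sketch of the resulting configuration is shown below, assuming the profile is activated through the standard SPRING_PROFILES_ACTIVE variable; the client secret and token endpoint are placeholders whose exact values depend on your identity provider.

```yaml
# Example settings with the kafka-auth profile active (placeholder values).
SPRING_PROFILES_ACTIVE: "kafka-auth"
SPRING_KAFKA_SECURITY_PROTOCOL: "SASL_PLAINTEXT"
KAFKA_OAUTH_CLIENT_ID: "kafka"
KAFKA_OAUTH_CLIENT_SECRET: "change-me"
KAFKA_OAUTH_TOKEN_ENDPOINT_URI: "https://idp.example.com/realms/kafka-authz/protocol/openid-connect/token"
```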