
avn service flink#

Here you’ll find the full list of commands for avn service flink.

Warning

The Aiven for Apache Flink® CLI commands have been updated; to execute them, you must use aiven-client version 2.18.0 or later.

Manage Aiven for Apache Flink® applications#

avn service flink create-application#

Create a new Aiven for Apache Flink® application in the specified service and project.

Parameters:

  • project: The name of the project

  • service_name: The name of the service

  • application_properties: Application properties definition for the Aiven for Flink® application, either as a JSON string or a file path (prefixed with '@') containing the JSON configuration

The application_properties parameter should contain the following common properties in JSON format:

  • name: The name of the application

  • application_version: (Optional) The version of the application
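Instead of escaping the JSON inline, the properties can be stored in a file and passed with the '@' prefix. A minimal Python sketch (the file name app_props.json is illustrative):

```python
import json

# Write the application properties to a file; the CLI can then consume it
# with the '@' prefix instead of an inline JSON string, for example:
#   avn service flink create-application flink-democli \
#     --project my-project "@app_props.json"
# (the file name app_props.json is illustrative)
props = {"name": "DemoApp"}

with open("app_props.json", "w") as f:
    json.dump(props, f)
```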

Example: Creates an Aiven for Apache Flink application named DemoApp in the service flink-democli and project my-project.

avn service flink create-application flink-democli  \
  --project my-project                              \
  "{\"name\":\"DemoApp\"}"

An example of avn service flink create-application output:

{
  "application_versions": [],
  "created_at": "2023-02-08T07:37:25.165996Z",
  "created_by": "wilma@example.com",
  "id": "2b29f4aa-a496-4fca-8575-23544415606e",
  "name": "DemoApp",
  "updated_at": "2023-02-08T07:37:25.165996Z",
  "updated_by": "wilma@example.com"
}
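The application ID returned here is what the other subcommands expect in their --application-id flag, so it is often worth capturing. A sketch of extracting it in Python, using the sample response above:

```python
import json

# Sample response from `avn service flink create-application`, as shown above.
sample_output = """
{
  "application_versions": [],
  "created_at": "2023-02-08T07:37:25.165996Z",
  "created_by": "wilma@example.com",
  "id": "2b29f4aa-a496-4fca-8575-23544415606e",
  "name": "DemoApp",
  "updated_at": "2023-02-08T07:37:25.165996Z",
  "updated_by": "wilma@example.com"
}
"""

def extract_application_id(output: str) -> str:
    """Return the `id` field, for use with later --application-id flags."""
    return json.loads(output)["id"]

print(extract_application_id(sample_output))
# 2b29f4aa-a496-4fca-8575-23544415606e
```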

avn service flink list-applications#

Lists all the Aiven for Apache Flink® applications in a specified project and service.

Parameters:

  • project: The name of the project

  • service_name: The name of the service

Example: Lists all the Aiven for Flink applications for the service flink-democli in the project my-project.

avn service flink list-applications flink-democli \
  --project my-project

An example of avn service flink list-applications output:

{
  "applications": [
      {
          "created_at": "2023-02-08T07:37:25.165996Z",
          "created_by": "wilma@example.com",
          "id": "2b29f4aa-a496-4fca-8575-23544415606e",
          "name": "DemoApp",
          "updated_at": "2023-02-08T07:37:25.165996Z",
          "updated_by": "wilma@example.com"
      }
  ]
}

avn service flink get-application#

Retrieves information about an Aiven for Flink® application in a specified project and service.

Parameters:

  • project: The name of the project

  • service_name: The name of the service

  • application-id: The ID of the Aiven for Flink application to retrieve information about

Example: Retrieves information about the Aiven for Flink® application with application-id 2b29f4aa-a496-4fca-8575-23544415606e for the service flink-democli in the project my-project.

avn service flink get-application flink-democli \
  --project my-project                          \
  --application-id 2b29f4aa-a496-4fca-8575-23544415606e

An example of avn service flink get-application output:

{
    "application_versions": [],
    "created_at": "2023-02-08T07:37:25.165996Z",
    "created_by": "wilma@example.com",
    "id": "2b29f4aa-a496-4fca-8575-23544415606e",
    "name": "DemoApp",
    "updated_at": "2023-02-08T07:37:25.165996Z",
    "updated_by": "wilma@example.com"
}

avn service flink update-application#

Update an Aiven for Flink® application in a specified project and service.

Parameters:

  • project: The name of the project

  • service_name: The name of the service

  • application-id: The ID of the Aiven for Flink application to update

  • application-properties: Application properties definition for the Aiven for Flink® application, either as a JSON string or a file path (prefixed with '@') containing the JSON configuration

The application_properties parameter should contain the following common properties in JSON format:

  • name: The name of the application

Example: Updates the name of the Aiven for Flink application from Demo to DemoApp for application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 in the service flink-democli and project my-project.

avn service flink update-application flink-democli      \
  --project my-project                                  \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 \
  "{\"name\":\"DemoApp\"}"

avn service flink delete-application#

Delete an Aiven for Flink® application in a specified project and service.

Parameters:

  • project: The name of the project

  • service_name: The name of the service

  • application-id: The ID of the Aiven for Flink application to delete

Example: Deletes the Aiven for Flink application with application-id 64192db8-d073-4e28-956b-82c71b016e3e for the service flink-democli in the project my-project.

avn service flink delete-application flink-democli  \
  --project my-project                              \
  --application-id 64192db8-d073-4e28-956b-82c71b016e3e

avn service flink create-application-version#

Create an Aiven for Flink® application version in a specified project and service.

Warning

Before creating an application, you need to create integrations between Aiven for Apache Flink® and the source/sink data services. Currently, you can define integrations with:

  • Aiven for Apache Kafka® as source/sink

  • Aiven for PostgreSQL® as source/sink

  • Aiven for OpenSearch® as sink

Sinking data using the Slack connector doesn't require an integration.

Example: To create an integration between an Aiven for Apache Flink® service named flink-democli and an Aiven for Apache Kafka® service named demo-kafka, use the following command:

avn service integration-create    \
  --integration-type flink        \
  --dest-service flink-democli    \
  --source-service demo-kafka

All the available integration command options can be found in the dedicated document.

Parameters:

  • project: The name of the project

  • service_name: The name of the service

  • application-id: The ID of the Aiven for Flink application to create a version for

  • application_version_properties: Application version properties definition for the Aiven for Flink® application, either as a JSON string or a file path (prefixed with '@') containing the JSON configuration

The application_version_properties parameter should contain the following common properties in JSON format:

  • sinks: An array of objects that contains the table creation statements of the sinks

    • create_table: A string that defines the CREATE TABLE statement of the sink, including the integration ID. The integration ID can be found with the integration-list command

  • sources: An array of objects that contains the table creation statements of the sources

    • create_table: A string that defines the CREATE TABLE statement of the source, including the integration ID. The integration ID can be found with the integration-list command

  • statement: The transformation SQL statement of the application
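Because the CREATE TABLE statements contain many quotes, escaping them inline in the shell quickly gets unwieldy; one option is to build the JSON in Python and pass it to the CLI as a file with the '@' prefix. A minimal sketch under that assumption (the file name version_props.json is illustrative, and the table definitions are shortened):

```python
import json

# Build the application_version_properties payload programmatically to avoid
# shell quoting issues, then pass it to the CLI as a file, for example:
#   avn service flink create-application-version flink-democli \
#     --project my-project --application-id <id> "@version_props.json"
# (file name illustrative; table definitions shortened for brevity)
integration_id = "4ec23427-9e9f-4827-90fa-ea9e38c31bc3"

props = {
    "sources": [
        {
            "create_table": (
                "CREATE TABLE pizza_orders (id INT, name VARCHAR) "
                "WITH ('connector' = 'kafka', 'topic' = 'pizza_orders_topic', "
                "'value.format' = 'json')"
            ),
            "integration_id": integration_id,
        }
    ],
    "sinks": [
        {
            "create_table": (
                "CREATE TABLE special_orders (id INT, name VARCHAR) "
                "WITH ('connector' = 'kafka', 'topic' = 'special_orders_topic', "
                "'value.format' = 'json')"
            ),
            "integration_id": integration_id,
        }
    ],
    "statement": "INSERT INTO special_orders SELECT id, name FROM pizza_orders",
}

with open("version_props.json", "w") as f:
    json.dump(props, f, indent=2)
```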

Example: Creates a new Aiven for Flink application version for application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 with the following details:

  • Source: a table named pizza_orders, reading from an Apache Kafka® topic named pizza_orders_topic using the integration with id 4ec23427-9e9f-4827-90fa-ea9e38c31bc3 and the following columns:

    id INT,
    shop VARCHAR,
    name VARCHAR,
    phoneNumber VARCHAR,
    address VARCHAR,
    pizzas ARRAY<ROW(pizzaName VARCHAR, additionalToppings ARRAY <VARCHAR>)>

  • Sink: a table named special_orders, writing to an Apache Kafka® topic named special_orders_topic using the integration with id 4ec23427-9e9f-4827-90fa-ea9e38c31bc3 and the following columns:

    id INT,
    name VARCHAR,
    topping VARCHAR
    
  • SQL statement:

    INSERT INTO special_orders
    SELECT id,
      name,
      c.topping
    FROM pizza_orders
      CROSS JOIN UNNEST(pizzas) b
      CROSS JOIN UNNEST(b.additionalToppings) AS c(topping)
    WHERE c.topping IN ('🍍 pineapple', '🍓 strawberry','🍌 banana')
    
avn service flink create-application-version flink-democli        \
  --project my-project                                            \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222           \
  """{
    \"sinks\": [
      {
        \"create_table\":
          \"CREATE TABLE special_orders (                         \
              id INT,                                             \
              name VARCHAR,                                       \
              topping VARCHAR                                     \
              )                                                   \
            WITH (                                                \
              'connector' = 'kafka',                              \
              'properties.bootstrap.servers' = '',                \
              'scan.startup.mode' = 'earliest-offset',            \
              'value.fields-include' = 'ALL',                     \
              'topic' = 'special_orders_topic',                   \
              'value.format' = 'json'                             \
            )\",
            \"integration_id\": \"4ec23427-9e9f-4827-90fa-ea9e38c31bc3\"
      } ],
    \"sources\": [
      {
        \"create_table\":
          \"CREATE TABLE pizza_orders (                                                   \
              id INT,                                                                     \
              shop VARCHAR,                                                               \
              name VARCHAR,                                                               \
              phoneNumber VARCHAR,                                                        \
              address VARCHAR,                                                            \
              pizzas ARRAY<ROW(pizzaName VARCHAR, additionalToppings ARRAY <VARCHAR>)>)   \
            WITH (                                                                        \
              'connector' = 'kafka',                                                      \
              'properties.bootstrap.servers' = '',                                        \
              'scan.startup.mode' = 'earliest-offset',                                    \
              'topic' = 'pizza_orders_topic',                                             \
              'value.format' = 'json'                                                     \
            )\",
            \"integration_id\": \"4ec23427-9e9f-4827-90fa-ea9e38c31bc3\"
        }
        ],
    \"statement\":
      \"INSERT INTO special_orders                                        \
        SELECT id,                                                        \
          name,                                                           \
          c.topping                                                       \
        FROM pizza_orders                                                 \
          CROSS JOIN UNNEST(pizzas) b                                     \
          CROSS JOIN UNNEST(b.additionalToppings) AS c(topping)           \
        WHERE c.topping IN ('🍍 pineapple', '🍓 strawberry','🍌 banana')\"
  }"""
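Escaping the JSON inline, as above, gets unwieldy. As noted in the parameter table, application_version_properties can instead be a file path prefixed with @. A minimal sketch of that workflow (the file name version-props.json is an arbitrary choice for this example, and the avn call is shown commented out because it needs an authenticated client and an existing project):

```shell
# Write the application version properties to a plain JSON file,
# with no shell escaping needed (version-props.json is a hypothetical name).
cat > version-props.json <<'EOF'
{
  "sources": [
    {
      "create_table": "CREATE TABLE pizza_orders (id INT, shop VARCHAR, name VARCHAR, phoneNumber VARCHAR, address VARCHAR, pizzas ARRAY<ROW(pizzaName VARCHAR, additionalToppings ARRAY<VARCHAR>)>) WITH ('connector' = 'kafka', 'properties.bootstrap.servers' = '', 'scan.startup.mode' = 'earliest-offset', 'topic' = 'pizza_orders_topic', 'value.format' = 'json')",
      "integration_id": "4ec23427-9e9f-4827-90fa-ea9e38c31bc3"
    }
  ],
  "sinks": [
    {
      "create_table": "CREATE TABLE special_orders (id INT, name VARCHAR, topping VARCHAR) WITH ('connector' = 'kafka', 'properties.bootstrap.servers' = '', 'scan.startup.mode' = 'earliest-offset', 'value.fields-include' = 'ALL', 'topic' = 'special_orders_topic', 'value.format' = 'json')",
      "integration_id": "4ec23427-9e9f-4827-90fa-ea9e38c31bc3"
    }
  ],
  "statement": "INSERT INTO special_orders SELECT id, name, c.topping FROM pizza_orders CROSS JOIN UNNEST(pizzas) b CROSS JOIN UNNEST(b.additionalToppings) AS c(topping) WHERE c.topping IN ('🍍 pineapple', '🍓 strawberry', '🍌 banana')"
}
EOF

# Sanity-check the file parses as JSON before handing it to the CLI.
python3 -m json.tool version-props.json > /dev/null && echo "valid JSON"

# Then pass the file path prefixed with @:
# avn service flink create-application-version flink-democli \
#   --project my-project \
#   --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 \
#   @version-props.json
```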

avn service flink validate-application-version#

Validates the Aiven for Flink® application version in a specified project and service.

Warning

Before creating an application, you need to create integrations between Aiven for Apache Flink® and the source/sink data services. Currently, you can define integrations with:

  • Aiven for Apache Kafka® as source/sink

  • Aiven for PostgreSQL® as source/sink

  • Aiven for OpenSearch® as sink

Sinking data using the Slack connector doesn't require an integration.

Example: to create an integration between an Aiven for Apache Flink service named flink-democli and an Aiven for Apache Kafka service named demo-kafka you can use the following command:

avn service integration-create    \
  --integration-type flink        \
  --dest-service flink-democli    \
  --source-service demo-kafka

All the available integration command options can be found in the dedicated document.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application to create a version for

application_version_properties

Application version properties definition for Aiven for Flink application, either as a JSON string or a file path (prefixed with ‘@’) containing the JSON configuration

The application_version_properties parameter should contain the following common properties in JSON format

Parameter

Information

sinks

An array of objects that contains the table creation statements of the sinks

create_table

A string that defines the CREATE TABLE statement of the sink including the integration ID. The integration ID can be found with the integration-list command

sources

An array of objects that contains the table creation statements of the sources

create_table

A string that defines the CREATE TABLE statement of the source including the integration ID. The integration ID can be found with the integration-list command

statement

The transformation SQL statement of the application

Example: Validates the Aiven for Flink application version for the application-id 986b2d5f-7eda-480c-bcb3-0f903a866222.

avn service flink validate-application-version flink-democli \
  --project my-project                                       \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222      \
  """{
    \"sources\": [
      {
        \"create_table\":
          \"CREATE TABLE pizza_orders (                                                  \
              id INT,                                                                    \
              shop VARCHAR,                                                              \
              name VARCHAR,                                                              \
              phoneNumber VARCHAR,                                                       \
              address VARCHAR,                                                           \
              pizzas ARRAY<ROW(pizzaName VARCHAR, additionalToppings ARRAY<VARCHAR>)>)   \
            WITH (                                                                       \
              'connector' = 'kafka',                                                     \
              'properties.bootstrap.servers' = '',                                       \
              'scan.startup.mode' = 'earliest-offset',                                   \
              'topic' = 'pizza_orders_topic',                                            \
              'value.format' = 'json'                                                    \
            )\",
        \"integration_id\": \"4ec23427-9e9f-4827-90fa-ea9e38c31bc3\"
      }
    ],
    \"sinks\": [
      {
        \"create_table\":
          \"CREATE TABLE special_orders (                         \
              id INT,                                             \
              name VARCHAR,                                       \
              topping VARCHAR                                     \
            )                                                     \
            WITH (                                                \
              'connector' = 'kafka',                              \
              'properties.bootstrap.servers' = '',                \
              'scan.startup.mode' = 'earliest-offset',            \
              'value.fields-include' = 'ALL',                     \
              'topic' = 'special_orders_topic',                   \
              'value.format' = 'json'                             \
            )\",
        \"integration_id\": \"4ec23427-9e9f-4827-90fa-ea9e38c31bc3\"
      }
    ],
    \"statement\":
      \"INSERT INTO special_orders                                          \
        SELECT id,                                                          \
          name,                                                             \
          c.topping                                                         \
        FROM pizza_orders                                                   \
          CROSS JOIN UNNEST(pizzas) b                                       \
          CROSS JOIN UNNEST(b.additionalToppings) AS c(topping)             \
        WHERE c.topping IN ('🍍 pineapple', '🍓 strawberry', '🍌 banana')\"
  }"""

avn service flink get-application-version#

Retrieves information about a specific version of an Aiven for Flink® application in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

application-version-id

The ID of the Aiven for Flink application version to retrieve information about

Example: Retrieves information about a specific Aiven for Flink® application version for the service flink-democli and project my-project with:

  • Application id: 986b2d5f-7eda-480c-bcb3-0f903a866222

  • Application version id: 7a1c6266-64da-4f6f-a8b0-75207f997c8d

avn service flink get-application-version flink-democli \
  --project my-project                                  \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 \
  --application-version-id 7a1c6266-64da-4f6f-a8b0-75207f997c8d

avn service flink delete-application-version#

Deletes a version of the Aiven for Flink® application in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

application-version-id

The ID of the Aiven for Flink application version to delete

Example: Deletes the Aiven for Flink application version for the service flink-democli and project my-project with:

  • Application id: 986b2d5f-7eda-480c-bcb3-0f903a866222

  • Application version id: 7a1c6266-64da-4f6f-a8b0-75207f997c8d

avn service flink delete-application-version flink-democli  \
  --project my-project                                      \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222     \
  --application-version-id 7a1c6266-64da-4f6f-a8b0-75207f997c8d

avn service flink list-application-deployments#

Lists all the Aiven for Flink® application deployments in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

Example: Lists all the Aiven for Flink application deployments for application-id f171af72-fdf0-442c-947c-7f6a0efa83ad for the service flink-democli, in the project my-project.

avn service flink list-application-deployments flink-democli \
  --project my-project                                       \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad

avn service flink get-application-deployment#

Retrieves information about an Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

deployment-id

The ID of the Aiven for Flink application deployment. This ID can be obtained from the output of the avn service flink list-application-deployments command

Example: Retrieves the details of the Aiven for Flink application deployment for the application-id f171af72-fdf0-442c-947c-7f6a0efa83ad, deployment-id bee0b5cb-01e7-49e6-bddb-a750caed4229 for the service flink-democli, in the project my-project.

avn service flink get-application-deployment flink-democli \
  --project my-project                                     \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad     \
  --deployment-id bee0b5cb-01e7-49e6-bddb-a750caed4229

avn service flink create-application-deployment#

Creates a new Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

deployment_properties

The deployment properties definition for Aiven for Flink application, either as a JSON string or a file path (prefixed with ‘@’) containing the JSON configuration

The deployment_properties parameter should contain the following common properties in JSON format

Parameter

Information

parallelism

The number of parallel instances for the task

restart_enabled

Specifies whether the Flink job is restarted if it fails

starting_savepoint

(Optional) The savepoint from which you want to deploy.

version_id

The ID of the application version.

Example: Creates a new Aiven for Flink application deployment for the application-id 986b2d5f-7eda-480c-bcb3-0f903a866222.

avn service flink create-application-deployment flink-democli \
  --project my-project                                        \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222       \
  "{\"parallelism\": 1, \"restart_enabled\": true, \"version_id\": \"7a1c6266-64da-4f6f-a8b0-75207f997c8d\"}"
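As with application version properties, deployment_properties can also be supplied as a file path prefixed with @. A minimal sketch (deploy-props.json is a hypothetical file name; the avn call is shown commented out because it needs an authenticated client and an existing project):

```shell
# Write the deployment properties to a JSON file
# (deploy-props.json is an arbitrary name used for this sketch).
cat > deploy-props.json <<'EOF'
{
  "parallelism": 1,
  "restart_enabled": true,
  "version_id": "7a1c6266-64da-4f6f-a8b0-75207f997c8d"
}
EOF

# Sanity-check the file parses as JSON before handing it to the CLI.
python3 -m json.tool deploy-props.json > /dev/null && echo "valid JSON"

# Then pass the file path prefixed with @:
# avn service flink create-application-deployment flink-democli \
#   --project my-project \
#   --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 \
#   @deploy-props.json
```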

avn service flink delete-application-deployment#

Deletes an Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink® application

deployment-id

The ID of the Aiven for Flink® application deployment to delete

Example: Deletes the Aiven for Flink application deployment with application-id f171af72-fdf0-442c-947c-7f6a0efa83ad and deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef.

avn service flink delete-application-deployment flink-democli   \
  --project my-project                                          \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad         \
  --deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef

avn service flink stop-application-deployment#

Stops a running Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

deployment-id

The ID of the Aiven for Flink application deployment to stop

Example: Stops the Aiven for Flink application deployment with application-id f171af72-fdf0-442c-947c-7f6a0efa83ad and deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef.

avn service flink stop-application-deployment flink-democli \
  --project my-project                                      \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad     \
  --deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef

avn service flink cancel-application-deployments#

Cancels an Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

deployment-id

The ID of the Aiven for Flink application deployment to cancel

Example: Cancels the Aiven for Flink application deployment with application-id f171af72-fdf0-442c-947c-7f6a0efa83ad and deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef.

avn service flink cancel-application-deployments flink-democli \
  --project my-project                                         \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad        \
  --deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef

Apache, Apache Kafka, Kafka, Apache Flink, Flink, Apache Cassandra, and Cassandra are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. M3, M3 Aggregator, M3 Coordinator, OpenSearch, PostgreSQL, MySQL, InfluxDB, Grafana, Terraform, and Kubernetes are trademarks and property of their respective owners. *Redis is a registered trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Aiven is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Aiven. All product and service names used in this website are for identification purposes only and do not imply endorsement.

Copyright © 2022, Aiven Team | Show Source | Last updated: February 2023