Use S3 API for MOSK Object Storage

This section describes how to use the S3 API with the MOSK Object Storage service.


Prerequisites

Before you start using the S3 API, ensure the necessary prerequisites are in place: access to an OpenStack deployment with the Object Storage service enabled and authenticated credentials.

Object Storage service enabled

Verify the presence of the object-store service within the OpenStack Identity service catalog. If the service is present, the following command returns endpoints related to the object-store service:

openstack catalog show object-store

If the object-store service is not present in the OpenStack Identity service catalog, consult your cloud operator to confirm that the Object Storage service is enabled in the kind: OpenStackDeployment resource that controls your OpenStack installation. The following element must be present in the configuration:

kind: OpenStackDeployment
spec:
  features:
    services:
      - object-storage


Create EC2 credentials

The S3 API uses the AWS authorization protocol, which is not directly compatible with the OpenStack Identity service, also known as Keystone. To access the MOSK Object Storage service through the S3 API, create EC2 credentials within the OpenStack Identity service:

openstack ec2 credentials create -f yaml

Example output:

access: a354a74e0fa3434e8039d0425f7a0b59
project_id: 274b929c00b346c2ad0849d19d3e6f46
secret: d7c2ca9488dd4c8ab3cff2f1aad1c683
trust_id: null
user_id: 801b9014d3d441478bf0ccac30b80459

When accessing the Object Storage service through the S3 API, take note of the access and secret fields. These values correspond to the access_key and secret_access_key options, or similarly named parameters, in S3-specific tools.
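As a sketch, the mapping between the credential fields and the option names that the tools expect can be expressed in Python. The values below are the illustrative ones from the example output above:

```python
# EC2 credential fields as returned by "openstack ec2 credentials create"
# (illustrative values from the example output above).
ec2_credentials = {
    "access": "a354a74e0fa3434e8039d0425f7a0b59",
    "secret": "d7c2ca9488dd4c8ab3cff2f1aad1c683",
}

# The same pair appears under different names in S3-specific tools:
# s3cmd uses access_key/secret_key, the AWS CLI uses
# AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY.
s3_options = {
    "access_key": ec2_credentials["access"],
    "secret_access_key": ec2_credentials["secret"],
}
print(s3_options["access_key"])
```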

Obtaining the S3 endpoint

The S3 API endpoint is the Object Storage service endpoint with the final /swift/v1/... section excluded.

To obtain the endpoint:

openstack versions show --service object-store --status CURRENT \
    --interface public --region-name <desired region> \
    -c Endpoint -f value | sed 's/\/swift\/.*$//'

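The same transformation can also be done in Python instead of sed; a minimal sketch, where the URL is a hypothetical example rather than a real endpoint:

```python
def s3_endpoint(swift_endpoint: str) -> str:
    """Strip the trailing /swift/... part from a Swift endpoint URL."""
    return swift_endpoint.split("/swift/")[0]

# hypothetical endpoint, for illustration only
print(s3_endpoint("https://object-store.example.com/swift/v1/AUTH_project"))
# https://object-store.example.com
```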

S3-specific tools configuration

To interact with OpenStack Object Storage through the S3 API, familiarize yourself with the essential S3-specific tools: s3cmd, the AWS Command Line Interface (CLI), and the Boto3 SDK for Python.

This section provides concise configuration examples for these tools, which also allow you to interact with Amazon S3 and any other cloud storage provider that supports the S3 protocol.


s3cmd

S3cmd is a free command-line client for uploading, retrieving, and managing data across cloud storage providers that support the S3 protocol, including Amazon S3.

Example of a minimal s3cfg configuration:

# use 'access' value from "openstack ec2 credentials create"
access_key = a354a74e0fa3434e8039d0425f7a0b59
# use 'secret' value from "openstack ec2 credentials create"
secret_key = d7c2ca9488dd4c8ab3cff2f1aad1c683
# use hostname of the "object-store" service endpoint, without protocol
host_base =
# important, leave empty
host_bucket =
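If you manage several environments, the file can also be rendered programmatically; a standard-library sketch, where the host name is a hypothetical placeholder:

```python
# Template for a minimal s3cfg file, matching the fields shown above.
S3CFG_TEMPLATE = """\
access_key = {access}
secret_key = {secret}
host_base = {host}
host_bucket =
"""

def render_s3cfg(access: str, secret: str, host: str) -> str:
    # host is the object-store endpoint hostname, without protocol
    return S3CFG_TEMPLATE.format(access=access, secret=secret, host=host)

cfg = render_s3cfg(
    "a354a74e0fa3434e8039d0425f7a0b59",
    "d7c2ca9488dd4c8ab3cff2f1aad1c683",
    "object-store.example.com",  # hypothetical hostname
)
print(cfg)
```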

When configured, you can use s3cmd as usual:

s3cmd -c s3cfg ls                                           # list buckets
s3cmd -c s3cfg mb s3://my-bucket                            # create a bucket
s3cmd -c s3cfg put myfile.txt s3://my-bucket                # upload file to bucket
s3cmd -c s3cfg get s3://my-bucket/myfile.txt myfile2.txt    # download file
s3cmd -c s3cfg rm s3://my-bucket/myfile.txt                 # delete file from bucket
s3cmd -c s3cfg rb s3://my-bucket                            # delete bucket


AWS CLI

The AWS Command Line Interface (CLI) is the official command-line tool provided by Amazon Web Services (AWS). It enables users to interact with AWS services directly from the command line and supports a wide range of operations, including resource provisioning, configuration management, deployment, and monitoring.

To start using the AWS CLI:

  1. Set the authorization values as shell variables:

    # use "access" field from created ec2 credentials
    export AWS_ACCESS_KEY_ID=a354a74e0fa3434e8039d0425f7a0b59
    # use "secret" field from created ec2 credentials
    export AWS_SECRET_ACCESS_KEY=d7c2ca9488dd4c8ab3cff2f1aad1c683
  2. Explicitly provide the --endpoint-url option, set to the endpoint of the object-store service, with every aws CLI command:

    export S3_API_URL=
    aws --endpoint-url $S3_API_URL s3 mb s3://my-bucket
    aws --endpoint-url $S3_API_URL s3 cp myfile.txt s3://my-bucket
    aws --endpoint-url $S3_API_URL s3 ls s3://my-bucket
    aws --endpoint-url $S3_API_URL s3 rm s3://my-bucket/myfile.txt
    aws --endpoint-url $S3_API_URL s3 rb s3://my-bucket


Boto3

Boto3 is the official AWS SDK for Python 3, providing comprehensive support for AWS services, including the S3 API for object storage. It gives developers extensive tools to interact with AWS services programmatically, such as managing, accessing, and manipulating data stored in S3 buckets.

Assuming that you have configured the environment with the same environment variables as in the AWS CLI example, you can create an S3 client in Python as follows:

import os

import boto3

# high-level "resource" interface
s3 = boto3.resource("s3", endpoint_url=os.getenv("S3_API_URL"))
for bucket in s3.buckets.all():  # iterates over rich Bucket objects
    print(bucket.name)

# low-level "client" interface
s3 = boto3.client("s3", endpoint_url=os.getenv("S3_API_URL"))
response = s3.list_buckets()  # returns a raw JSON-like dictionary
for bucket in response["Buckets"]:
    print(bucket["Name"])