Azure Blob Connector

The Azure Blob connector enables AxonFlow agents to interact with Azure Blob Storage for storing and retrieving unstructured data.

Overview

Property      Value
------------  -----------------------------------------------------------
Type          azureblob
Edition       Community
Auth Methods  Account Key, Connection String, Managed Identity, SAS Token
Capabilities  query, execute, presign, streaming

Use Cases

  • Store documents for RAG pipelines in Azure environments
  • Archive agent outputs and generated reports
  • Access data from Azure Data Lake Storage Gen2
  • Integrate with Azure-native applications

Configuration

Environment Variables

# Required
MCP_azureblob_storage_ACCOUNT_NAME="mystorageaccount"
MCP_azureblob_storage_DEFAULT_CONTAINER="mycontainer"

# Authentication (choose one method)

# Option 1: Account Key
MCP_azureblob_storage_ACCOUNT_KEY="base64encodedkey..."

# Option 2: Connection String
MCP_azureblob_storage_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"

# Option 3: Managed Identity (recommended for Azure deployments)
MCP_azureblob_storage_USE_MANAGED_IDENTITY="true"
# Optional: specify client ID for user-assigned managed identity
MCP_azureblob_storage_CLIENT_ID="12345678-1234-1234-1234-123456789012"

# Option 4: SAS Token
MCP_azureblob_storage_SAS_TOKEN="sv=2021-06-08&ss=b&srt=sco&sp=rwdlacitfx..."

# Optional
MCP_azureblob_storage_ENDPOINT="https://mystorageaccount.blob.core.windows.net"
MCP_azureblob_storage_TIMEOUT="30s"

Connector Config (Customer Portal)

{
  "name": "azure-documents",
  "type": "azureblob",
  "options": {
    "account_name": "mystorageaccount",
    "default_container": "documents"
  },
  "credentials": {
    "account_key": "base64encodedkey..."
  }
}

Operations

Query Operations

List Blobs

curl -X POST https://your-axonflow.example.com/mcp/resources/query \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "statement": "list_blobs",
    "parameters": {
      "container": "documents",
      "prefix": "reports/",
      "max_results": 100
    }
  }'

Response:

{
  "rows": [
    {
      "name": "reports/monthly-report.pdf",
      "size": 204800,
      "last_modified": "2025-12-07T10:30:00Z",
      "content_type": "application/pdf",
      "etag": "0x8D..."
    }
  ],
  "metadata": {
    "container": "documents"
  }
}
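A response in this shape can be post-processed with standard tools. As a sketch, the sample response above can be filtered with jq to pull out only the blobs over a size threshold (response.json is just the sample saved to disk):

```shell
# Save the sample list_blobs response shown above
cat > response.json <<'EOF'
{
  "rows": [
    {
      "name": "reports/monthly-report.pdf",
      "size": 204800,
      "last_modified": "2025-12-07T10:30:00Z",
      "content_type": "application/pdf",
      "etag": "0x8D..."
    }
  ],
  "metadata": { "container": "documents" }
}
EOF

# Print the name of every blob larger than 100 KB (102400 bytes)
jq -r '.rows[] | select(.size > 102400) | .name' response.json
```

This prints reports/monthly-report.pdf for the sample data, since 204800 bytes exceeds the threshold.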

Get Blob

curl -X POST https://your-axonflow.example.com/mcp/resources/query \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "statement": "get_blob",
    "parameters": {
      "container": "documents",
      "blob": "reports/monthly-report.pdf"
    }
  }'

Get Blob Properties

curl -X POST https://your-axonflow.example.com/mcp/resources/query \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "statement": "get_blob_properties",
    "parameters": {
      "blob": "reports/monthly-report.pdf"
    }
  }'

Generate SAS URL

curl -X POST https://your-axonflow.example.com/mcp/resources/query \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "statement": "generate_sas",
    "parameters": {
      "blob": "reports/monthly-report.pdf",
      "permissions": "r",
      "expires_in": 3600
    }
  }'
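The connector docs do not show the generate_sas response shape; assuming the signed URL comes back in a "url" field (an assumption — check your actual response), it can be extracted and fetched directly with no further authentication:

```shell
# Hypothetical generate_sas response -- the "url" field name is an assumption,
# not confirmed by the connector reference; verify against your actual response
cat > sas.json <<'EOF'
{
  "rows": [
    {
      "url": "https://mystorageaccount.blob.core.windows.net/documents/reports/monthly-report.pdf?sv=..."
    }
  ]
}
EOF

SAS_URL=$(jq -r '.rows[0].url' sas.json)
echo "$SAS_URL"
# The SAS URL embeds its own credentials, so a plain GET works until it expires:
# curl -o monthly-report.pdf "$SAS_URL"
```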

List Containers

curl -X POST https://your-axonflow.example.com/mcp/resources/query \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "statement": "list_containers",
    "parameters": {
      "prefix": "prod-"
    }
  }'

Execute Operations

Upload Blob

curl -X POST https://your-axonflow.example.com/mcp/tools/execute \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "action": "upload_blob",
    "parameters": {
      "container": "documents",
      "blob": "uploads/new-file.txt",
      "body": "File content here",
      "content_type": "text/plain"
    }
  }'

Delete Blob

curl -X POST https://your-axonflow.example.com/mcp/tools/execute \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "action": "delete_blob",
    "parameters": {
      "blob": "uploads/old-file.txt"
    }
  }'

Copy Blob

curl -X POST https://your-axonflow.example.com/mcp/tools/execute \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "action": "copy_blob",
    "parameters": {
      "source_container": "source",
      "source_blob": "original.pdf",
      "dest_container": "dest",
      "dest_blob": "copy.pdf"
    }
  }'

Create Container

curl -X POST https://your-axonflow.example.com/mcp/tools/execute \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "action": "create_container",
    "parameters": {
      "container": "new-container",
      "public_access": "none"
    }
  }'

Delete Container

curl -X POST https://your-axonflow.example.com/mcp/tools/execute \
  -H "Content-Type: application/json" \
  -d '{
    "connector": "azure-documents",
    "action": "delete_container",
    "parameters": {
      "container": "old-container"
    }
  }'

Authentication Methods

Managed Identity

For Azure VM, AKS, or App Service deployments:

MCP_azureblob_storage_USE_MANAGED_IDENTITY="true"
MCP_azureblob_storage_ACCOUNT_NAME="mystorageaccount"

Required Azure RBAC Role:

  • Storage Blob Data Contributor for read/write
  • Storage Blob Data Reader for read-only

Service Principal

For non-Azure deployments or specific identity requirements:

MCP_azureblob_storage_TENANT_ID="your-tenant-id"
MCP_azureblob_storage_CLIENT_ID="your-client-id"
MCP_azureblob_storage_CLIENT_SECRET="your-client-secret"

Best Practices

Security

  1. Use Managed Identity in Azure deployments (no keys to manage)
  2. Scope permissions with container-level access policies
  3. Enable soft delete for accidental deletion protection
  4. Use private endpoints for VNet-integrated deployments
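Soft delete (item 3 above) is enabled on the storage account, outside the connector. A sketch with the Azure CLI, reusing the account name from the earlier examples (my-rg and the 7-day retention window are placeholder values):

```shell
# Enable blob soft delete with a 7-day retention window
az storage account blob-service-properties update \
  --account-name mystorageaccount \
  --resource-group my-rg \
  --enable-delete-retention true \
  --delete-retention-days 7
```

With this in place, a delete_blob action through the connector marks the blob as soft-deleted, and it can be undeleted from the Azure Portal or CLI within the retention window.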

Performance

  1. Use hot tier for frequently accessed data
  2. Enable CDN for static content serving
  3. Use append blobs for log-style data

Example Azure RBAC Assignment

# Assign Storage Blob Data Contributor to managed identity
az role assignment create \
  --assignee <managed-identity-object-id> \
  --role "Storage Blob Data Contributor" \
  --scope /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>

Azure Data Lake Storage Gen2 Compatibility

The Azure Blob connector is compatible with ADLS Gen2 (Azure Data Lake Storage Gen2) accounts that have hierarchical namespace enabled. ADLS Gen2 provides file system semantics (directories, atomic rename operations) on top of blob storage.

Configuration for ADLS Gen2

MCP_azureblob_adls_ACCOUNT_NAME="myadlsaccount"
MCP_azureblob_adls_USE_MANAGED_IDENTITY="true"
# ADLS Gen2 uses the DFS endpoint instead of the blob endpoint
MCP_azureblob_adls_ENDPOINT="https://myadlsaccount.dfs.core.windows.net"

Key differences from standard Blob Storage:

  • ADLS Gen2 supports true directory operations (create, rename, delete directories)
  • The connector uses the DFS endpoint (dfs.core.windows.net) rather than the blob endpoint
  • Hierarchical namespace accounts support POSIX-style ACLs in addition to Azure RBAC
  • For ADLS Gen2, the Storage Blob Data Contributor role is required at minimum; Storage Blob Data Owner is needed to set ACLs
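Hierarchical namespace can only be chosen when the storage account is created. A sketch with the Azure CLI (my-rg, eastus, and the SKU are placeholder values):

```shell
# Create a storage account with hierarchical namespace (ADLS Gen2) enabled
az storage account create \
  --name myadlsaccount \
  --resource-group my-rg \
  --location eastus \
  --sku Standard_LRS \
  --hns true

# Verify hierarchical namespace is enabled (should print true)
az storage account show --name myadlsaccount --query isHnsEnabled
```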

Storage Tiers

Azure Blob Storage supports multiple access tiers for cost optimization:

Tier     Use Case                          Access Latency                Cost
-------  --------------------------------  ----------------------------  ------------------------------
Hot      Frequently accessed data          Milliseconds                  Higher storage, lower access
Cool     Infrequently accessed (30+ days)  Milliseconds                  Lower storage, higher access
Cold     Rarely accessed (90+ days)        Milliseconds                  Lower storage, higher access
Archive  Long-term backup/compliance       Hours (rehydration required)  Lowest storage, highest access

The connector reads and writes to the blob's current tier. Tier changes (e.g., moving to Cool or Archive) must be managed through the Azure Portal, CLI, or lifecycle management policies. Blobs in the Archive tier must be rehydrated before they can be read through the connector.
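Tier changes and rehydration are done with Azure tooling, not through the connector. A sketch with the Azure CLI, reusing the account and container names from earlier examples (the archived blob path is a placeholder):

```shell
# Move a blob to the Cool tier
az storage blob set-tier \
  --account-name mystorageaccount \
  --container-name documents \
  --name reports/monthly-report.pdf \
  --tier Cool \
  --auth-mode login

# Rehydrate an archived blob so the connector can read it again;
# rehydration can take hours, and priority may be Standard or High
az storage blob set-tier \
  --account-name mystorageaccount \
  --container-name documents \
  --name archive/old-report.pdf \
  --tier Hot \
  --rehydrate-priority Standard \
  --auth-mode login
```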

Troubleshooting

AuthorizationFailure

  • Verify RBAC role assignment is complete
  • Check managed identity is enabled on the resource
  • Ensure storage account allows access from your network

ContainerNotFound

  • Verify container name is correct
  • Check if container exists in the storage account
  • Ensure proper permissions on the container

Connection Timeout

  • Check network connectivity to Azure endpoint
  • Verify firewall rules allow access
  • Use private endpoints for VNet deployments
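A quick way to separate network problems from auth problems is to probe the endpoint directly. A sketch, reusing the account name from earlier examples:

```shell
# Confirm DNS resolution of the blob endpoint
nslookup mystorageaccount.blob.core.windows.net

# An unauthenticated request should come back quickly with an HTTP error
# code (e.g. 400 or 403) -- that still proves the endpoint is reachable;
# a hang or timeout here points to firewall or VNet rules instead
curl -s -o /dev/null -w '%{http_code}\n' https://mystorageaccount.blob.core.windows.net/

# Inspect the storage account's network rules (default action, IP rules)
az storage account show --name mystorageaccount --query networkRuleSet
```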