
Configuration Reference

Complete reference guide for all PowerMem configuration options. This document provides detailed explanations for every configuration parameter in env.example.

Configuration Methods

PowerMem supports two configuration methods:

  1. Environment Variables (.env file) - Recommended for most use cases
  2. JSON/Dictionary Configuration - Useful for programmatic configuration

Method 1: Environment Variables

Create a .env file in your project root and configure using environment variables. See the examples in each section below.

from powermem import Memory, auto_config

# Load configuration (auto-loads from .env or uses defaults)
config = auto_config()

# Create memory instance
memory = Memory(config=config)

Method 2: JSON/Dictionary Configuration

Pass configuration as a Python dictionary (JSON-like format). This is useful when:

  • Loading configuration from a JSON file
  • Programmatically generating configuration
  • Embedding configuration in application code

from powermem import Memory

config = {
    'vector_store': {
        'provider': 'sqlite',
        'config': {
            'database_path': './data/powermem_dev.db'
        }
    },
    'llm': {
        'provider': 'qwen',
        'config': {
            'api_key': 'your_api_key',
            'model': 'qwen-plus'
        }
    },
    'embedder': {
        'provider': 'qwen',
        'config': {
            'api_key': 'your_api_key',
            'model': 'text-embedding-v4'
        }
    }
}

memory = Memory(config=config)

Loading from JSON File

You can also load configuration from a JSON file:

import json
from powermem import Memory

# Load from JSON file
with open('config.json', 'r') as f:
    config = json.load(f)

memory = Memory(config=config)

Table of Contents

  1. Database Configuration
  2. LLM Configuration
  3. Embedding Configuration
  4. Agent Configuration
  5. Intelligent Memory Configuration
  6. Performance Configuration
  7. Security Configuration
  8. Telemetry Configuration
  9. Audit Configuration
  10. Logging Configuration

1. Database Configuration (Required)

PowerMem requires a database provider to store memories and vectors. Choose one of the supported providers: SQLite (development), OceanBase (production), or PostgreSQL.

Common Database Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| DATABASE_PROVIDER | string | Yes | sqlite | Database provider to use. Options: sqlite, oceanbase, postgres |

SQLite Configuration

SQLite is the default database provider, recommended for development and single-user applications.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| SQLITE_PATH | string | Yes* | ./data/powermem_dev.db | Path to the SQLite database file. Required when DATABASE_PROVIDER=sqlite |
| SQLITE_ENABLE_WAL | boolean | No | true | Enable Write-Ahead Logging (WAL) mode for better concurrency |
| SQLITE_TIMEOUT | integer | No | 30 | Connection timeout in seconds |
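
If you want to confirm that WAL mode took effect on the database file PowerMem created, you can inspect the file directly with Python's built-in sqlite3 module. This check is independent of PowerMem's API; the path below assumes the default SQLITE_PATH.

import sqlite3

# Open the same file PowerMem uses (default SQLITE_PATH) and read its journal mode.
conn = sqlite3.connect('./data/powermem_dev.db')
journal_mode = conn.execute('PRAGMA journal_mode').fetchone()[0]
conn.close()

# Prints "wal" when Write-Ahead Logging is active, "delete" otherwise.
print(f'journal_mode = {journal_mode}')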

Environment Variables Example:

DATABASE_PROVIDER=sqlite
SQLITE_PATH=./data/powermem_dev.db
SQLITE_ENABLE_WAL=true
SQLITE_TIMEOUT=30

JSON Configuration Example:

{
  "vector_store": {
    "provider": "sqlite",
    "config": {
      "database_path": "./data/powermem_dev.db",
      "enable_wal": true,
      "timeout": 30
    }
  }
}

Python Dictionary Example:

config = {
    'vector_store': {
        'provider': 'sqlite',
        'config': {
            'database_path': './data/powermem_dev.db',
            'enable_wal': True,
            'timeout': 30
        }
    }
}

OceanBase Configuration

OceanBase is recommended for production deployments and enterprise applications with high-scale requirements.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| OCEANBASE_HOST | string | Yes* | 127.0.0.1 | OceanBase server hostname or IP address. Required when DATABASE_PROVIDER=oceanbase |
| OCEANBASE_PORT | integer | Yes* | 2881 | OceanBase server port. Required when DATABASE_PROVIDER=oceanbase |
| OCEANBASE_USER | string | Yes* | root | Database username. Required when DATABASE_PROVIDER=oceanbase |
| OCEANBASE_PASSWORD | string | Yes* | - | Database password. Required when DATABASE_PROVIDER=oceanbase |
| OCEANBASE_DATABASE | string | Yes* | powermem | Database name. Required when DATABASE_PROVIDER=oceanbase |
| OCEANBASE_COLLECTION | string | No | memories | Collection/table name for storing memories |
| OCEANBASE_INDEX_TYPE | string | No | IVF_FLAT | Vector index type. Options: IVF_FLAT, HNSW, etc. |
| OCEANBASE_VECTOR_METRIC_TYPE | string | No | cosine | Vector similarity metric. Options: cosine, euclidean, dot_product |
| OCEANBASE_TEXT_FIELD | string | No | document | Field name for storing text content |
| OCEANBASE_VECTOR_FIELD | string | No | embedding | Field name for storing vector embeddings |
| OCEANBASE_EMBEDDING_MODEL_DIMS | integer | Yes* | 1536 | Vector dimensions. Must match your embedding model dimensions. Required when DATABASE_PROVIDER=oceanbase |
| OCEANBASE_PRIMARY_FIELD | string | No | id | Primary key field name |
| OCEANBASE_METADATA_FIELD | string | No | metadata | Field name for storing metadata |
| OCEANBASE_VIDX_NAME | string | No | memories_vidx | Vector index name |

Environment Variables Example:

DATABASE_PROVIDER=oceanbase
OCEANBASE_HOST=127.0.0.1
OCEANBASE_PORT=2881
OCEANBASE_USER=root
OCEANBASE_PASSWORD=your_password
OCEANBASE_DATABASE=powermem
OCEANBASE_COLLECTION=memories
OCEANBASE_INDEX_TYPE=IVF_FLAT
OCEANBASE_VECTOR_METRIC_TYPE=cosine
OCEANBASE_EMBEDDING_MODEL_DIMS=1536

JSON Configuration Example:

{
  "vector_store": {
    "provider": "oceanbase",
    "config": {
      "collection_name": "memories",
      "connection_args": {
        "host": "127.0.0.1",
        "port": 2881,
        "user": "root",
        "password": "your_password",
        "db_name": "powermem"
      },
      "vidx_metric_type": "cosine",
      "index_type": "IVF_FLAT",
      "embedding_model_dims": 1536,
      "primary_field": "id",
      "vector_field": "embedding",
      "text_field": "document",
      "metadata_field": "metadata",
      "vidx_name": "memories_vidx"
    }
  }
}

Python Dictionary Example:

config = {
    'vector_store': {
        'provider': 'oceanbase',
        'config': {
            'collection_name': 'memories',
            'connection_args': {
                'host': '127.0.0.1',
                'port': 2881,
                'user': 'root',
                'password': 'your_password',
                'db_name': 'powermem'
            },
            'vidx_metric_type': 'cosine',
            'index_type': 'IVF_FLAT',
            'embedding_model_dims': 1536
        }
    }
}

PostgreSQL Configuration

PostgreSQL with the pgvector extension is supported for vector storage.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| POSTGRES_HOST | string | Yes* | 127.0.0.1 | PostgreSQL server hostname or IP address. Required when DATABASE_PROVIDER=postgres |
| POSTGRES_PORT | integer | Yes* | 5432 | PostgreSQL server port. Required when DATABASE_PROVIDER=postgres |
| POSTGRES_USER | string | Yes* | postgres | Database username. Required when DATABASE_PROVIDER=postgres |
| POSTGRES_PASSWORD | string | Yes* | - | Database password. Required when DATABASE_PROVIDER=postgres |
| POSTGRES_DATABASE | string | Yes* | powermem | Database name. Required when DATABASE_PROVIDER=postgres |
| DATABASE_SSLMODE | string | No | prefer | SSL connection mode. Options: disable, allow, prefer, require, verify-ca, verify-full |
| DATABASE_POOL_SIZE | integer | No | 10 | Connection pool size |
| DATABASE_MAX_OVERFLOW | integer | No | 20 | Maximum overflow connections in the pool |
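
The pgvector extension must be installed in the target database before vectors can be stored in it. A minimal pre-flight check, assuming the psycopg2 driver and the connection settings from the table above:

import psycopg2

# Connect with the same settings you plan to give PowerMem.
conn = psycopg2.connect(
    host='127.0.0.1',
    port=5432,
    user='postgres',
    password='your_password',
    dbname='powermem'
)
with conn.cursor() as cur:
    # Returns a row only if the pgvector extension is installed in this database.
    cur.execute("SELECT extversion FROM pg_extension WHERE extname = 'vector'")
    row = cur.fetchone()
conn.close()

print('pgvector installed:', row is not None)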

Environment Variables Example:

DATABASE_PROVIDER=postgres
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your_password
POSTGRES_DATABASE=powermem
POSTGRES_COLLECTION=memories
DATABASE_SSLMODE=prefer
DATABASE_POOL_SIZE=10
DATABASE_MAX_OVERFLOW=20

JSON Configuration Example:

{
  "vector_store": {
    "provider": "postgres",
    "config": {
      "collection_name": "memories",
      "dbname": "powermem",
      "host": "127.0.0.1",
      "port": 5432,
      "user": "postgres",
      "password": "your_password",
      "embedding_model_dims": 1536,
      "diskann": true,
      "hnsw": true
    }
  }
}

Python Dictionary Example:

config = {
    'vector_store': {
        'provider': 'postgres',
        'config': {
            'collection_name': 'memories',
            'dbname': 'powermem',
            'host': '127.0.0.1',
            'port': 5432,
            'user': 'postgres',
            'password': 'your_password',
            'embedding_model_dims': 1536
        }
    }
}

2. LLM Configuration (Required)

PowerMem requires an LLM provider for memory generation and retrieval. Choose from Qwen, OpenAI, or Mock (for testing).

Common LLM Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| LLM_PROVIDER | string | Yes | qwen | LLM provider to use. Options: qwen, openai, mock |

Qwen Configuration (Default)

Qwen is the default LLM provider, powered by Alibaba Cloud DashScope.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| LLM_API_KEY | string | Yes* | - | DashScope API key. Required when LLM_PROVIDER=qwen |
| LLM_MODEL | string | No | qwen-plus | Qwen model name. Options: qwen-plus, qwen-max, qwen-turbo, qwen-long, etc. |
| QWEN_LLM_BASE_URL | string | No | https://dashscope.aliyuncs.com/api/v1 | API base URL for DashScope |
| LLM_TEMPERATURE | float | No | 0.7 | Sampling temperature (0.0-2.0). Higher values make output more random |
| LLM_MAX_TOKENS | integer | No | 1000 | Maximum number of tokens to generate |
| LLM_TOP_P | float | No | 0.8 | Nucleus sampling parameter (0.0-1.0). Controls diversity of output |
| LLM_TOP_K | integer | No | 50 | Top-K sampling parameter. Limits sampling to top K tokens |
| LLM_ENABLE_SEARCH | boolean | No | false | Enable web search capability (if supported by model) |

Environment Variables Example:

LLM_PROVIDER=qwen
LLM_API_KEY=your_api_key_here
LLM_MODEL=qwen-plus
QWEN_LLM_BASE_URL=https://dashscope.aliyuncs.com/api/v1
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=1000
LLM_TOP_P=0.8
LLM_TOP_K=50
LLM_ENABLE_SEARCH=false

JSON Configuration Example:

{
  "llm": {
    "provider": "qwen",
    "config": {
      "api_key": "your_api_key_here",
      "model": "qwen-plus",
      "dashscope_base_url": "https://dashscope.aliyuncs.com/api/v1",
      "temperature": 0.7,
      "max_tokens": 1000,
      "top_p": 0.8,
      "top_k": 50,
      "enable_search": false
    }
  }
}

Python Dictionary Example:

config = {
    'llm': {
        'provider': 'qwen',
        'config': {
            'api_key': 'your_api_key_here',
            'model': 'qwen-plus',
            'dashscope_base_url': 'https://dashscope.aliyuncs.com/api/v1',
            'temperature': 0.7,
            'max_tokens': 1000,
            'top_p': 0.8,
            'top_k': 50,
            'enable_search': False
        }
    }
}

OpenAI Configuration

OpenAI GPT models are supported.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| LLM_API_KEY | string | Yes* | - | OpenAI API key. Required when LLM_PROVIDER=openai |
| LLM_MODEL | string | No | gpt-4 | OpenAI model name. Options: gpt-4, gpt-4-turbo, gpt-3.5-turbo, etc. |
| OPENAI_LLM_BASE_URL | string | No | https://api.openai.com/v1 | API base URL for OpenAI |
| LLM_TEMPERATURE | float | No | 0.7 | Sampling temperature (0.0-2.0) |
| LLM_MAX_TOKENS | integer | No | 1000 | Maximum number of tokens to generate |
| LLM_TOP_P | float | No | 1.0 | Nucleus sampling parameter (0.0-1.0) |

Environment Variables Example:

LLM_PROVIDER=openai
LLM_API_KEY=your-openai-api-key
LLM_MODEL=gpt-4
OPENAI_LLM_BASE_URL=https://api.openai.com/v1
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=1000
LLM_TOP_P=1.0

JSON Configuration Example:

{
  "llm": {
    "provider": "openai",
    "config": {
      "api_key": "your-openai-api-key",
      "model": "gpt-4",
      "openai_base_url": "https://api.openai.com/v1",
      "temperature": 0.7,
      "max_tokens": 1000,
      "top_p": 1.0
    }
  }
}

Python Dictionary Example:

config = {
    'llm': {
        'provider': 'openai',
        'config': {
            'api_key': 'your-openai-api-key',
            'model': 'gpt-4',
            'openai_base_url': 'https://api.openai.com/v1',
            'temperature': 0.7,
            'max_tokens': 1000,
            'top_p': 1.0
        }
    }
}

3. Embedding Configuration (Required)

PowerMem requires an embedding provider to convert text into vector embeddings for similarity search.

Common Embedding Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| EMBEDDING_PROVIDER | string | Yes | qwen | Embedding provider to use. Options: qwen, openai, mock |

Qwen Embedding Configuration (Default)

Qwen embeddings are provided by Alibaba Cloud DashScope.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| EMBEDDING_API_KEY | string | Yes* | - | DashScope API key. Required when EMBEDDING_PROVIDER=qwen |
| EMBEDDING_MODEL | string | No | text-embedding-v4 | Qwen embedding model name |
| EMBEDDING_DIMS | integer | Yes* | 1536 | Vector dimensions. Must match OCEANBASE_EMBEDDING_MODEL_DIMS if using OceanBase (see the dimension check below). Required when EMBEDDING_PROVIDER=qwen |
| QWEN_EMBEDDING_BASE_URL | string | No | https://dashscope.aliyuncs.com/api/v1 | API base URL for DashScope |
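
A dimension mismatch between the embedder and the vector store typically only surfaces when the first memory is written, so it can be worth failing fast before constructing Memory. A minimal sketch, assuming the dictionary layout used in the OceanBase and embedder examples in this document (check_dims is a hypothetical helper, not part of the PowerMem API):

# Hypothetical helper: fail fast if embedder and vector store dimensions disagree.
def check_dims(config: dict) -> None:
    embed_dims = config['embedder']['config'].get('embedding_dims')
    store_dims = config['vector_store']['config'].get('embedding_model_dims')
    if store_dims is not None and embed_dims != store_dims:
        raise ValueError(
            f"embedding_dims ({embed_dims}) does not match "
            f"embedding_model_dims ({store_dims})"
        )

check_dims(config)  # call before Memory(config=config)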

Environment Variables Example:

EMBEDDING_PROVIDER=qwen
EMBEDDING_API_KEY=your_api_key_here
EMBEDDING_MODEL=text-embedding-v4
EMBEDDING_DIMS=1536
QWEN_EMBEDDING_BASE_URL=https://dashscope.aliyuncs.com/api/v1

JSON Configuration Example:

{
  "embedder": {
    "provider": "qwen",
    "config": {
      "api_key": "your_api_key_here",
      "model": "text-embedding-v4",
      "embedding_dims": 1536
    }
  }
}

Python Dictionary Example:

config = {
    'embedder': {
        'provider': 'qwen',
        'config': {
            'api_key': 'your_api_key_here',
            'model': 'text-embedding-v4',
            'embedding_dims': 1536
        }
    }
}

OpenAI Embedding Configuration

OpenAI provides text embedding models.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| EMBEDDING_API_KEY | string | Yes* | - | OpenAI API key. Required when EMBEDDING_PROVIDER=openai |
| EMBEDDING_MODEL | string | No | text-embedding-ada-002 | OpenAI embedding model name. Options: text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large |
| EMBEDDING_DIMS | integer | Yes* | 1536 | Vector dimensions. Varies by model (ada-002: 1536, 3-small: 1536, 3-large: 3072). Required when EMBEDDING_PROVIDER=openai |
| OPEN_EMBEDDING_BASE_URL | string | No | https://api.openai.com/v1 | API base URL for OpenAI |

Environment Variables Example:

EMBEDDING_PROVIDER=openai
EMBEDDING_API_KEY=your-openai-api-key
EMBEDDING_MODEL=text-embedding-ada-002
EMBEDDING_DIMS=1536
OPEN_EMBEDDING_BASE_URL=https://api.openai.com/v1

JSON Configuration Example:

{
  "embedder": {
    "provider": "openai",
    "config": {
      "api_key": "your-openai-api-key",
      "model": "text-embedding-ada-002",
      "embedding_dims": 1536
    }
  }
}

Python Dictionary Example:

config = {
    'embedder': {
        'provider': 'openai',
        'config': {
            'api_key': 'your-openai-api-key',
            'model': 'text-embedding-ada-002',
            'embedding_dims': 1536
        }
    }
}

4. Agent Configuration (Optional)

Agent configuration controls how PowerMem manages memory for AI agents.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| AGENT_ENABLED | boolean | No | true | Enable agent memory management |
| AGENT_DEFAULT_SCOPE | string | No | AGENT | Default scope for agent memories. Options: AGENT, USER, GLOBAL |
| AGENT_DEFAULT_PRIVACY_LEVEL | string | No | PRIVATE | Default privacy level. Options: PRIVATE, PUBLIC, SHARED |
| AGENT_DEFAULT_COLLABORATION_LEVEL | string | No | READ_ONLY | Default collaboration level. Options: READ_ONLY, READ_WRITE, FULL |
| AGENT_DEFAULT_ACCESS_PERMISSION | string | No | OWNER_ONLY | Default access permission. Options: OWNER_ONLY, AUTHORIZED, PUBLIC |
| AGENT_MEMORY_MODE | string | No | auto | Agent memory mode. Options: auto, multi_agent, multi_user, hybrid |

Environment Variables Example:

AGENT_ENABLED=true
AGENT_DEFAULT_SCOPE=AGENT
AGENT_DEFAULT_PRIVACY_LEVEL=PRIVATE
AGENT_DEFAULT_COLLABORATION_LEVEL=READ_ONLY
AGENT_DEFAULT_ACCESS_PERMISSION=OWNER_ONLY
AGENT_MEMORY_MODE=auto

JSON Configuration Example:

{
  "agent_memory": {
    "enabled": true,
    "mode": "auto",
    "default_scope": "AGENT",
    "default_privacy_level": "PRIVATE",
    "default_collaboration_level": "READ_ONLY",
    "default_access_permission": "OWNER_ONLY"
  }
}

Python Dictionary Example:

config = {
    'agent_memory': {
        'enabled': True,
        'mode': 'auto',
        'default_scope': 'AGENT',
        'default_privacy_level': 'PRIVATE',
        'default_collaboration_level': 'READ_ONLY',
        'default_access_permission': 'OWNER_ONLY'
    }
}

5. Intelligent Memory Configuration (Optional)

Intelligent memory uses the Ebbinghaus forgetting curve to manage memory retention and decay.

Ebbinghaus Forgetting Curve Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| INTELLIGENT_MEMORY_ENABLED | boolean | No | true | Enable intelligent memory management |
| INTELLIGENT_MEMORY_INITIAL_RETENTION | float | No | 1.0 | Initial retention score (0.0-1.0). Starting memory strength |
| INTELLIGENT_MEMORY_DECAY_RATE | float | No | 0.1 | Memory decay rate (0.0-1.0). Higher values mean faster forgetting |
| INTELLIGENT_MEMORY_REINFORCEMENT_FACTOR | float | No | 0.3 | Reinforcement factor (0.0-1.0). How much memory strengthens when accessed |
| INTELLIGENT_MEMORY_WORKING_THRESHOLD | float | No | 0.3 | Working memory threshold (0.0-1.0). Memories below this are in working memory |
| INTELLIGENT_MEMORY_SHORT_TERM_THRESHOLD | float | No | 0.6 | Short-term memory threshold (0.0-1.0). Memories between the working threshold and this value are short-term |
| INTELLIGENT_MEMORY_LONG_TERM_THRESHOLD | float | No | 0.8 | Long-term memory threshold (0.0-1.0). Memories above this are long-term |
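
To build intuition for how the decay rate, reinforcement factor, and thresholds interact, the sketch below applies a simple exponential Ebbinghaus-style curve and prints retention scores you can compare against the three thresholds. The formula is an illustrative assumption, not PowerMem's internal calculation, which may differ.

import math

# Illustrative only: retention decays exponentially with elapsed time (hours)
# and receives a diminishing boost each time the memory is accessed.
def retention(hours_since_access, accesses=0,
              initial=1.0, decay_rate=0.1, reinforcement=0.3):
    score = initial * math.exp(-decay_rate * hours_since_access)
    score += reinforcement * accesses * (1.0 - score)
    return max(0.0, min(1.0, score))

for hours in (1, 6, 24, 72):
    print(f"after {hours:>2}h: unaccessed={retention(hours):.2f}  "
          f"accessed once={retention(hours, accesses=1):.2f}")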

Memory Decay Calculation Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| MEMORY_DECAY_ENABLED | boolean | No | true | Enable memory decay calculations |
| MEMORY_DECAY_ALGORITHM | string | No | ebbinghaus | Decay algorithm to use. Options: ebbinghaus |
| MEMORY_DECAY_BASE_RETENTION | float | No | 1.0 | Base retention score (0.0-1.0) |
| MEMORY_DECAY_FORGETTING_RATE | float | No | 0.1 | Forgetting rate (0.0-1.0) |
| MEMORY_DECAY_REINFORCEMENT_FACTOR | float | No | 0.3 | Reinforcement factor for decay calculations (0.0-1.0) |

Environment Variables Example:

INTELLIGENT_MEMORY_ENABLED=true
INTELLIGENT_MEMORY_INITIAL_RETENTION=1.0
INTELLIGENT_MEMORY_DECAY_RATE=0.1
INTELLIGENT_MEMORY_REINFORCEMENT_FACTOR=0.3
INTELLIGENT_MEMORY_WORKING_THRESHOLD=0.3
INTELLIGENT_MEMORY_SHORT_TERM_THRESHOLD=0.6
INTELLIGENT_MEMORY_LONG_TERM_THRESHOLD=0.8
MEMORY_DECAY_ENABLED=true
MEMORY_DECAY_ALGORITHM=ebbinghaus
MEMORY_DECAY_BASE_RETENTION=1.0
MEMORY_DECAY_FORGETTING_RATE=0.1
MEMORY_DECAY_REINFORCEMENT_FACTOR=0.3

JSON Configuration Example:

{
  "intelligent_memory": {
    "enabled": true,
    "initial_retention": 1.0,
    "decay_rate": 0.1,
    "reinforcement_factor": 0.3,
    "working_threshold": 0.3,
    "short_term_threshold": 0.6,
    "long_term_threshold": 0.8
  }
}

Python Dictionary Example:

config = {
    'intelligent_memory': {
        'enabled': True,
        'initial_retention': 1.0,
        'decay_rate': 0.1,
        'reinforcement_factor': 0.3,
        'working_threshold': 0.3,
        'short_term_threshold': 0.6,
        'long_term_threshold': 0.8
    }
}

6. Performance Configuration (Optional)

Performance settings control batch sizes, caching, and search parameters.

Memory Management Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| MEMORY_BATCH_SIZE | integer | No | 100 | Number of memories to process in a single batch |
| MEMORY_CACHE_SIZE | integer | No | 1000 | Maximum number of memories to cache in memory |
| MEMORY_CACHE_TTL | integer | No | 3600 | Cache time-to-live in seconds |
| MEMORY_SEARCH_LIMIT | integer | No | 10 | Maximum number of results to return from memory search |
| MEMORY_SEARCH_THRESHOLD | float | No | 0.7 | Minimum similarity threshold for memory search (0.0-1.0) |

Vector Store Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| VECTOR_STORE_BATCH_SIZE | integer | No | 50 | Number of vectors to process in a single batch |
| VECTOR_STORE_CACHE_SIZE | integer | No | 500 | Maximum number of vectors to cache |
| VECTOR_STORE_INDEX_REBUILD_INTERVAL | integer | No | 86400 | Vector index rebuild interval in seconds (24 hours) |

Environment Variables Example:

MEMORY_BATCH_SIZE=100
MEMORY_CACHE_SIZE=1000
MEMORY_CACHE_TTL=3600
MEMORY_SEARCH_LIMIT=10
MEMORY_SEARCH_THRESHOLD=0.7
VECTOR_STORE_BATCH_SIZE=50
VECTOR_STORE_CACHE_SIZE=500
VECTOR_STORE_INDEX_REBUILD_INTERVAL=86400

Note: Performance settings are typically configured through environment variables. JSON configuration for these settings may vary based on implementation. Check the specific API documentation for programmatic configuration options.
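
One way to adjust these values programmatically while still going through the environment-variable path is to set them in os.environ before calling auto_config(). This sketch assumes auto_config() reads the process environment at call time:

import os
from powermem import Memory, auto_config

# Override selected performance settings for this process only,
# then let auto_config() pick them up from the environment.
os.environ["MEMORY_SEARCH_LIMIT"] = "20"
os.environ["MEMORY_SEARCH_THRESHOLD"] = "0.75"
os.environ["VECTOR_STORE_BATCH_SIZE"] = "100"

config = auto_config()
memory = Memory(config=config)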


7. Security Configuration (Optional)

Security settings control encryption and access control.

Encryption Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| ENCRYPTION_ENABLED | boolean | No | false | Enable encryption for stored memories |
| ENCRYPTION_KEY | string | Yes* | - | Encryption key. Required when ENCRYPTION_ENABLED=true. Should be a secure random string |
| ENCRYPTION_ALGORITHM | string | No | AES-256-GCM | Encryption algorithm to use. Options: AES-256-GCM |

Access Control Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| ACCESS_CONTROL_ENABLED | boolean | No | true | Enable access control for memories |
| ACCESS_CONTROL_DEFAULT_PERMISSION | string | No | READ_ONLY | Default permission level. Options: READ_ONLY, READ_WRITE, FULL |
| ACCESS_CONTROL_ADMIN_USERS | string | No | admin,root | Comma-separated list of admin usernames |

Environment Variables Example:

ENCRYPTION_ENABLED=false
ENCRYPTION_KEY=
ENCRYPTION_ALGORITHM=AES-256-GCM
ACCESS_CONTROL_ENABLED=true
ACCESS_CONTROL_DEFAULT_PERMISSION=READ_ONLY
ACCESS_CONTROL_ADMIN_USERS=admin,root

Note: Security settings are typically configured through environment variables. JSON configuration for these settings may vary based on implementation.
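
If you enable encryption, ENCRYPTION_KEY should be a securely generated random value rather than a hand-typed string. One way to generate key material with only the Python standard library (the exact length and format PowerMem expects may vary; 32 bytes matches AES-256):

import secrets

# 32 random bytes, hex-encoded: suitable as 256-bit key material.
print(secrets.token_hex(32))

Copy the printed value into your .env file as the ENCRYPTION_KEY value.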


8. Telemetry Configuration (Optional)

Telemetry settings control usage analytics and monitoring.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| TELEMETRY_ENABLED | boolean | No | false | Enable telemetry data collection |
| TELEMETRY_ENDPOINT | string | No | https://telemetry.powermem.ai | Telemetry endpoint URL |
| TELEMETRY_API_KEY | string | Yes* | - | API key for telemetry endpoint. Required when TELEMETRY_ENABLED=true |
| TELEMETRY_BATCH_SIZE | integer | No | 100 | Number of telemetry events to batch before sending |
| TELEMETRY_FLUSH_INTERVAL | integer | No | 30 | Telemetry flush interval in seconds |
| TELEMETRY_RETENTION_DAYS | integer | No | 30 | Number of days to retain telemetry data |

Environment Variables Example:

TELEMETRY_ENABLED=false
TELEMETRY_ENDPOINT=https://telemetry.powermem.ai
TELEMETRY_API_KEY=
TELEMETRY_BATCH_SIZE=100
TELEMETRY_FLUSH_INTERVAL=30
TELEMETRY_RETENTION_DAYS=30

JSON Configuration Example:

{
  "telemetry": {
    "enable_telemetry": false,
    "telemetry_endpoint": "https://telemetry.powermem.ai",
    "telemetry_api_key": "",
    "telemetry_batch_size": 100,
    "telemetry_flush_interval": 30
  }
}

Python Dictionary Example:

config = {
    'telemetry': {
        'enable_telemetry': False,
        'telemetry_endpoint': 'https://telemetry.powermem.ai',
        'telemetry_api_key': '',
        'telemetry_batch_size': 100,
        'telemetry_flush_interval': 30
    }
}

9. Audit Configuration (Optional)

Audit settings control audit logging for compliance and security.

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| AUDIT_ENABLED | boolean | No | true | Enable audit logging |
| AUDIT_LOG_FILE | string | No | ./logs/audit.log | Path to audit log file |
| AUDIT_LOG_LEVEL | string | No | INFO | Audit log level. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL |
| AUDIT_RETENTION_DAYS | integer | No | 90 | Number of days to retain audit logs |
| AUDIT_COMPRESS_LOGS | boolean | No | true | Compress old audit log files |
| AUDIT_LOG_ROTATION_SIZE | string | No | 100MB | Maximum size of audit log file before rotation (e.g., 100MB, 1GB) |

Environment Variables Example:

AUDIT_ENABLED=true
AUDIT_LOG_FILE=./logs/audit.log
AUDIT_LOG_LEVEL=INFO
AUDIT_RETENTION_DAYS=90
AUDIT_COMPRESS_LOGS=true
AUDIT_LOG_ROTATION_SIZE=100MB

JSON Configuration Example:

{
  "audit": {
    "enabled": true,
    "log_file": "./logs/audit.log",
    "log_level": "INFO",
    "retention_days": 90
  }
}

Python Dictionary Example:

config = {
    'audit': {
        'enabled': True,
        'log_file': './logs/audit.log',
        'log_level': 'INFO',
        'retention_days': 90
    }
}

10. Logging Configuration (Optional)

Logging settings control general application logging.

General Logging Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| LOGGING_LEVEL | string | No | DEBUG | Logging level. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL |
| LOGGING_FORMAT | string | No | %(asctime)s - %(name)s - %(levelname)s - %(message)s | Log message format (Python logging format) |
| LOGGING_FILE | string | No | ./logs/powermem.log | Path to log file |
| LOGGING_MAX_SIZE | string | No | 100MB | Maximum size of log file before rotation |
| LOGGING_BACKUP_COUNT | integer | No | 5 | Number of backup log files to keep |
| LOGGING_COMPRESS_BACKUPS | boolean | No | true | Compress old log files |

Console Logging Settings

| Configuration | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| LOGGING_CONSOLE_ENABLED | boolean | No | true | Enable console logging |
| LOGGING_CONSOLE_LEVEL | string | No | INFO | Console logging level. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL |
| LOGGING_CONSOLE_FORMAT | string | No | %(levelname)s - %(message)s | Console log message format |
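
LOGGING_FORMAT and LOGGING_CONSOLE_FORMAT use Python's standard logging format syntax, so you can preview what a log line will look like with the logging module alone; this is independent of how PowerMem wires up its own handlers:

import logging

# Preview the default file-log format from the table above.
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logging.getLogger('powermem').info('example log line')
# prints something like: 2024-01-01 12:00:00,000 - powermem - INFO - example log line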

Environment Variables Example:

LOGGING_LEVEL=DEBUG
LOGGING_FORMAT=%(asctime)s - %(name)s - %(levelname)s - %(message)s
LOGGING_FILE=./logs/powermem.log
LOGGING_MAX_SIZE=100MB
LOGGING_BACKUP_COUNT=5
LOGGING_COMPRESS_BACKUPS=true
LOGGING_CONSOLE_ENABLED=true
LOGGING_CONSOLE_LEVEL=INFO
LOGGING_CONSOLE_FORMAT=%(levelname)s - %(message)s

JSON Configuration Example:

{
  "logging": {
    "level": "DEBUG",
    "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    "file": "./logs/powermem.log"
  }
}

Python Dictionary Example:

config = {
    'logging': {
        'level': 'DEBUG',
        'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        'file': './logs/powermem.log'
    }
}

Quick Start Examples

Minimal Development Configuration

Environment Variables:

# Required: Database
DATABASE_PROVIDER=sqlite
SQLITE_PATH=./data/powermem_dev.db

# Required: LLM
LLM_PROVIDER=qwen
LLM_API_KEY=your_api_key_here
LLM_MODEL=qwen-plus

# Required: Embedding
EMBEDDING_PROVIDER=qwen
EMBEDDING_API_KEY=your_api_key_here
EMBEDDING_MODEL=text-embedding-v4
EMBEDDING_DIMS=1536

JSON Configuration:

{
  "vector_store": {
    "provider": "sqlite",
    "config": {
      "database_path": "./data/powermem_dev.db"
    }
  },
  "llm": {
    "provider": "qwen",
    "config": {
      "api_key": "your_api_key_here",
      "model": "qwen-plus"
    }
  },
  "embedder": {
    "provider": "qwen",
    "config": {
      "api_key": "your_api_key_here",
      "model": "text-embedding-v4",
      "embedding_dims": 1536
    }
  }
}

Python Dictionary:

config = {
    'vector_store': {
        'provider': 'sqlite',
        'config': {
            'database_path': './data/powermem_dev.db'
        }
    },
    'llm': {
        'provider': 'qwen',
        'config': {
            'api_key': 'your_api_key_here',
            'model': 'qwen-plus'
        }
    },
    'embedder': {
        'provider': 'qwen',
        'config': {
            'api_key': 'your_api_key_here',
            'model': 'text-embedding-v4',
            'embedding_dims': 1536
        }
    }
}

from powermem import Memory
memory = Memory(config=config)

Production Configuration with OceanBase

Environment Variables:

# Database
DATABASE_PROVIDER=oceanbase
OCEANBASE_HOST=prod-db.example.com
OCEANBASE_PORT=2881
OCEANBASE_USER=prod_user
OCEANBASE_PASSWORD=secure_password
OCEANBASE_DATABASE=powermem_prod
OCEANBASE_EMBEDDING_MODEL_DIMS=1536

# LLM
LLM_PROVIDER=qwen
LLM_API_KEY=production_key
LLM_MODEL=qwen-plus

# Embedding
EMBEDDING_PROVIDER=qwen
EMBEDDING_API_KEY=production_key
EMBEDDING_MODEL=text-embedding-v4
EMBEDDING_DIMS=1536

# Optional: Enable intelligent memory and audit
INTELLIGENT_MEMORY_ENABLED=true
AUDIT_ENABLED=true

JSON Configuration:

{
  "vector_store": {
    "provider": "oceanbase",
    "config": {
      "collection_name": "memories",
      "connection_args": {
        "host": "prod-db.example.com",
        "port": 2881,
        "user": "prod_user",
        "password": "secure_password",
        "db_name": "powermem_prod"
      },
      "embedding_model_dims": 1536,
      "vidx_metric_type": "cosine",
      "index_type": "IVF_FLAT"
    }
  },
  "llm": {
    "provider": "qwen",
    "config": {
      "api_key": "production_key",
      "model": "qwen-plus"
    }
  },
  "embedder": {
    "provider": "qwen",
    "config": {
      "api_key": "production_key",
      "model": "text-embedding-v4",
      "embedding_dims": 1536
    }
  },
  "intelligent_memory": {
    "enabled": true,
    "initial_retention": 1.0,
    "decay_rate": 0.1,
    "reinforcement_factor": 0.3
  },
  "audit": {
    "enabled": true,
    "log_file": "./logs/audit.log",
    "log_level": "INFO"
  }
}

Python Dictionary:

config = {
    'vector_store': {
        'provider': 'oceanbase',
        'config': {
            'collection_name': 'memories',
            'connection_args': {
                'host': 'prod-db.example.com',
                'port': 2881,
                'user': 'prod_user',
                'password': 'secure_password',
                'db_name': 'powermem_prod'
            },
            'embedding_model_dims': 1536,
            'vidx_metric_type': 'cosine',
            'index_type': 'IVF_FLAT'
        }
    },
    'llm': {
        'provider': 'qwen',
        'config': {
            'api_key': 'production_key',
            'model': 'qwen-plus'
        }
    },
    'embedder': {
        'provider': 'qwen',
        'config': {
            'api_key': 'production_key',
            'model': 'text-embedding-v4',
            'embedding_dims': 1536
        }
    },
    'intelligent_memory': {
        'enabled': True,
        'initial_retention': 1.0,
        'decay_rate': 0.1,
        'reinforcement_factor': 0.3
    },
    'audit': {
        'enabled': True,
        'log_file': './logs/audit.log',
        'log_level': 'INFO'
    }
}

from powermem import Memory
memory = Memory(config=config)

Complete Configuration Example (JSON)

Here's a complete JSON configuration file example (config.json) with all optional settings:

{
  "vector_store": {
    "provider": "sqlite",
    "config": {
      "database_path": "./data/powermem_dev.db",
      "enable_wal": true,
      "timeout": 30
    }
  },
  "llm": {
    "provider": "qwen",
    "config": {
      "api_key": "your_api_key_here",
      "model": "qwen-plus",
      "dashscope_base_url": "https://dashscope.aliyuncs.com/api/v1",
      "temperature": 0.7,
      "max_tokens": 1000,
      "top_p": 0.8,
      "top_k": 50,
      "enable_search": false
    }
  },
  "embedder": {
    "provider": "qwen",
    "config": {
      "api_key": "your_api_key_here",
      "model": "text-embedding-v4",
      "embedding_dims": 1536
    }
  },
  "agent_memory": {
    "enabled": true,
    "mode": "auto",
    "default_scope": "AGENT",
    "default_privacy_level": "PRIVATE",
    "default_collaboration_level": "READ_ONLY",
    "default_access_permission": "OWNER_ONLY"
  },
  "intelligent_memory": {
    "enabled": true,
    "initial_retention": 1.0,
    "decay_rate": 0.1,
    "reinforcement_factor": 0.3,
    "working_threshold": 0.3,
    "short_term_threshold": 0.6,
    "long_term_threshold": 0.8
  },
  "telemetry": {
    "enable_telemetry": false,
    "telemetry_endpoint": "https://telemetry.powermem.ai",
    "telemetry_api_key": "",
    "telemetry_batch_size": 100,
    "telemetry_flush_interval": 30
  },
  "audit": {
    "enabled": true,
    "log_file": "./logs/audit.log",
    "log_level": "INFO",
    "retention_days": 90
  },
  "logging": {
    "level": "DEBUG",
    "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    "file": "./logs/powermem.log"
  }
}

Loading from JSON file:

import json
from powermem import Memory

# Load configuration from JSON file
with open('config.json', 'r') as f:
    config = json.load(f)

# Create memory instance
memory = Memory(config=config)