Core Functions

kandc.init()

Initialize a new experiment run with configuration and tracking.
kandc.init(
    project: Optional[str] = None,
    name: Optional[str] = None,
    config: Optional[Dict[str, Any]] = None,
    tags: Optional[List[str]] = None,
    notes: Optional[str] = None,
    dir: Optional[Union[str, Path]] = None,
    mode: Optional[str] = None,
    reinit: bool = False,
    open_browser: bool = True,
    capture_code: bool = True,
    code_exclude_patterns: Optional[List[str]] = None,
    **kwargs
) -> Run
Parameters:
  • project (str, optional): Project name for organization
  • name (str, optional): Run name (auto-generated if not provided)
  • config (dict, optional): Hyperparameters and configuration to track
  • tags (list, optional): Tags for filtering and organization
  • notes (str, optional): Optional description or notes
  • dir (str/Path, optional): Directory to save run data
  • mode (str, optional): Run mode, one of "online", "offline", or "disabled"
  • reinit (bool): Whether to reinitialize if already initialized
  • open_browser (bool): Whether to open dashboard in browser (online mode)
  • capture_code (bool): Whether to capture code snapshot
  • code_exclude_patterns (list, optional): Patterns to exclude from code capture
Returns: Run object representing the initialized experiment
Examples:
# Basic initialization
run = kandc.init(
    project="optimize-transformer",
    name="test-run-1",
    config={"d_model": 64, "nhead": 4, "num_layers": 2, "seq_len": 16},
    tags=["transformer", "pytorch"],
    notes="Testing transformer architecture optimization"
)

# Disable code capture
run = kandc.init(
    project="optimize-transformer",
    capture_code=False  # No code snapshot
)

# Custom code exclusions
run = kandc.init(
    project="optimize-transformer",
    capture_code=True,
    code_exclude_patterns=[
        "*.pth",        # Model files
        "data/",        # Data directory
        "experiments/", # Experiment outputs
        "*.log"         # Log files
    ]
)

kandc.log()

Log metrics and data during experiment execution.
kandc.log(
    data: Dict[str, Any],
    x: Optional[float] = None
) -> None
Parameters:
  • data (dict): Dictionary of metrics to log
  • x (float, optional): Custom x-axis value (step, epoch, time, etc.)
Example:
# Basic logging
kandc.log({"loss": 0.25, "accuracy": 0.92})

# With custom x-axis
kandc.log({"val_loss": 0.18}, x=100)

# Complex data structures
kandc.log({
    "model_config": {"hidden_size": 128, "layers": 3},
    "dataset_stats": {"train_size": 50000, "val_size": 10000}
})

kandc.finish()

Finish the current run and ensure all data is saved and synced.
kandc.finish() -> None
Example:
try:
    kandc.init(project="my-project")
    # Your experiment code
    kandc.log({"metric": value})
finally:
    kandc.finish()  # Always finish, even on error

kandc.get_current_run()

Get the currently active run object.
kandc.get_current_run() -> Optional[Run]
Returns: Current Run object or None if not initialized
Example:
run = kandc.get_current_run()
if run:
    print(f"Current run: {run.name}")
    print(f"Run ID: {run.id}")

kandc.is_initialized()

Check if kandc is currently initialized.
kandc.is_initialized() -> bool
Returns: True if initialized, False otherwise
Example:
if not kandc.is_initialized():
    kandc.init(project="my-project")

PyTorch Profiling

kandc.ProfilerWrapper

Wrap any object with PyTorch profiler integration.
from kandc.annotators import ProfilerWrapper

wrapper = ProfilerWrapper(
    obj: Any,
    name: Optional[str] = None,
    activities: Optional[List[str]] = None,
    record_shapes: bool = True,
    profile_memory: bool = True,
    with_stack: bool = True
)
Parameters:
  • obj: The object to wrap and profile
  • name (str, optional): Name for the wrapped object in logs
  • activities (list, optional): Activities to profile, e.g. ['cpu', 'cuda']. Defaults to both.
  • record_shapes (bool): Whether to record tensor shapes
  • profile_memory (bool): Whether to profile memory usage
  • with_stack (bool): Whether to record call stacks
Example:
model = MyModel()
profiled_model = ProfilerWrapper(
    model,
    name="MyModel", 
    activities=['cpu', 'cuda'],
    record_shapes=True,
    profile_memory=True
)

# All method calls are now profiled
result = profiled_model.forward(data)

kandc.ProfilerDecorator

Decorator for automatic PyTorch profiling of classes or functions.
from kandc.annotators import ProfilerDecorator

@ProfilerDecorator(
    name: Optional[str] = None,
    activities: Optional[List[str]] = None,
    record_shapes: bool = True,
    profile_memory: bool = True,
    with_stack: bool = True
)
Example:
@ProfilerDecorator(name="OptimizedModel", record_shapes=True)
class OptimizedModel:
    def predict(self, x):
        return x * 2
    
    def train(self, data):
        return self.predict(data)

kandc.profile()

Convenience function to wrap an object with PyTorch profiling.
kandc.profile(
    obj: Any,
    name: Optional[str] = None,
    activities: Optional[List[str]] = None,
    record_shapes: bool = True,
    profile_memory: bool = True,
    with_stack: bool = True
) -> ProfilerWrapper
Example:
model = MyModel()
profiled_model = kandc.profile(model, name="MyModel")

kandc.profiler()

Convenience function to create a PyTorch profiler decorator.
kandc.profiler(
    name: Optional[str] = None,
    activities: Optional[List[str]] = None,
    record_shapes: bool = True,
    profile_memory: bool = True,
    with_stack: bool = True
) -> ProfilerDecorator
Example:
@kandc.profiler(name="MyFunction", record_shapes=True)
def my_function(x):
    return expensive_computation(x)

Model Profiling

kandc.capture_model_class()

Decorator to automatically profile PyTorch model classes.
kandc.capture_model_class(
    model_name: Optional[str] = None,
    record_shapes: bool = True,
    profile_memory: bool = True,
    **profiler_kwargs: Any
) -> Callable
Parameters:
  • model_name (str, optional): Name for the model traces (defaults to class name)
  • record_shapes (bool): Whether to record tensor shapes
  • profile_memory (bool): Whether to profile memory usage
  • **profiler_kwargs: Additional PyTorch profiler arguments
Example:
@kandc.capture_model_class(
    model_name="SimpleTransformer",
    record_shapes=True,
    profile_memory=True
)
class SimpleTransformer(nn.Module):
    def __init__(self, input_dim=32, seq_len=16, d_model=64, nhead=4, num_layers=2, num_classes=10):
        super().__init__()
        self.input_dim = input_dim
        self.seq_len = seq_len
        self.d_model = d_model

        # Project input to d_model
        self.input_proj = nn.Linear(input_dim, d_model)

        # Positional encoding (learnable)
        self.pos_embedding = nn.Parameter(torch.zeros(1, seq_len, d_model))

        # Transformer encoder
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

        # Output head
        self.head = nn.Sequential(
            nn.LayerNorm(d_model),
            nn.Linear(d_model, num_classes)
        )

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        x = self.input_proj(x)  # (batch, seq_len, d_model)
        x = x + self.pos_embedding  # Add positional encoding
        x = self.transformer(x)  # (batch, seq_len, d_model)
        x = x.mean(dim=1)  # Pool over sequence
        x = self.head(x)   # (batch, num_classes)
        return x

kandc.capture_model_instance()

Wrap an existing model instance for profiling.
kandc.capture_model_instance(
    model_instance: Any,
    model_name: Optional[str] = None,
    record_shapes: bool = True,
    profile_memory: bool = True,
    **profiler_kwargs: Any
) -> Any
Parameters:
  • model_instance: The model instance to wrap
  • model_name (str, optional): Name for the model traces
  • record_shapes (bool): Whether to record tensor shapes
  • profile_memory (bool): Whether to profile memory usage
  • **profiler_kwargs: Additional profiler arguments
Returns: Wrapped model instance that profiles every forward pass
Example:
# Existing transformer model
model = SimpleTransformer(input_dim=32, seq_len=16, d_model=64, nhead=4, num_layers=2, num_classes=10)

# Wrap for profiling
model = kandc.capture_model_instance(
    model,
    model_name="SimpleTransformer_Instance",
    record_shapes=True,
    profile_memory=True
)

# All forward passes now profiled
data = torch.randn(32, 16, 32)  # batch_size=32, seq_len=16, input_dim=32
output = model(data)

kandc.capture_trace()

Decorator to profile any function execution.
kandc.capture_trace(
    trace_name: Optional[str] = None,
    record_shapes: bool = False,
    profile_memory: bool = False,
    **profiler_kwargs: Any
) -> Callable
Parameters:
  • trace_name (str, optional): Name for the trace (defaults to function name)
  • record_shapes (bool): Whether to record tensor shapes
  • profile_memory (bool): Whether to profile memory usage
  • **profiler_kwargs: Additional profiler arguments
Example:
@kandc.capture_trace(
    trace_name="data_preprocessing",
    record_shapes=True
)
def preprocess_batch(images, labels):
    # Preprocessing code - automatically traced
    return processed_images, labels

kandc.parse_model_trace()

Parse and analyze a model trace file.
kandc.parse_model_trace(
    trace_file: str,
    model_name: str = "Unknown"
) -> Optional[Dict]
Parameters:
  • trace_file (str): Path to the trace file
  • model_name (str): Name of the model for analysis
Returns: Dictionary containing parsed trace analysis or None if parsing fails
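Example (the trace path below is a placeholder; actual trace files are written to the run directory by the profiling decorators):
# Parse a previously saved trace and inspect the analysis
analysis = kandc.parse_model_trace(
    "traces/SimpleTransformer_forward.json",  # placeholder path
    model_name="SimpleTransformer",
)
if analysis is not None:
    print("Trace analysis keys:", list(analysis.keys()))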

Timing Functions

kandc.timed()

Decorator to time function execution.
kandc.timed(
    name: Optional[str] = None
) -> Callable
Parameters:
  • name (str, optional): Name for the timing record (defaults to function name)
Example:
@kandc.timed(name="random_wait")
def random_wait():
    print("⏳ Starting random wait...")
    time.sleep(random.random() * 2)
    print("✅ Random wait complete")
    return "processing_complete"

# Usage
result = random_wait()  # Timing automatically recorded

kandc.timed_call()

Time a function call without using a decorator.
kandc.timed_call(
    name: str,
    fn: Callable[..., Any],
    *args: Any,
    **kwargs: Any
) -> Any
Parameters:
  • name (str): Name for the timing record
  • fn (callable): Function to time
  • *args: Positional arguments for the function
  • **kwargs: Keyword arguments for the function
Returns: Result of the function call
Example:
# Time an existing function
model = SimpleTransformer()
data = torch.randn(32, 16, 32)
result = kandc.timed_call("model_forward", model, data)

# Time a lambda or inline function
kandc.timed_call("data_processing", lambda: time.sleep(0.1))

API Client (Advanced)

kandc.APIClient

Low-level API client for direct backend communication.
client = kandc.APIClient(
    base_url: str = KANDC_BACKEND_URL,
    api_key: Optional[str] = None
)
Methods:
  • authenticate_with_browser(): Browser-based authentication
  • create_project(name, description, tags, metadata): Create new project
  • create_run(project_name, run_data): Create new run
  • log_metrics(run_id, metrics, step): Log metrics to run
  • create_artifact(run_id, artifact_data, file_path): Upload artifact
  • get_dashboard_url(project_id, run_id): Get dashboard URL
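Example (a rough sketch of direct client use; the argument order follows the method list above, and the assumption that create_project/create_run return dicts with an "id" field is illustrative, not documented behavior):
client = kandc.APIClient(api_key="your-api-key")

# Create a project and a run, then log a metric directly
project = client.create_project("my-project", "Direct API example", ["demo"], {})
run = client.create_run("my-project", {"name": "manual-run"})
client.log_metrics(run["id"], {"loss": 0.25}, 1)  # assumes a returned dict with an "id" key

print(client.get_dashboard_url(project["id"], run["id"]))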

Authentication

kandc.get_api_key()

Get the current API key.
kandc.get_api_key() -> Optional[str]
Returns: Current API key or None if not authenticated
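Example:
api_key = kandc.get_api_key()
if api_key is None:
    print("Not authenticated - call kandc.ensure_authenticated() first")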

kandc.ensure_authenticated()

Ensure user is authenticated, prompting if necessary.
kandc.ensure_authenticated() -> APIClient
Returns: Authenticated API client
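Example (the returned client exposes the methods listed under kandc.APIClient below):
client = kandc.ensure_authenticated()  # prompts for authentication if necessary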

Run Object

The Run object represents an active experiment run.

Properties

  • run.id (str): Unique run identifier
  • run.name (str): Run name
  • run.project (str): Project name
  • run.dir (Path): Local directory for run data

Methods

  • run.log(data, x): Log metrics to this run
  • run.log_artifact(path, name): Log artifact file
  • run.finish(): Finish this run
  • run.get_dashboard_url(): Get dashboard URL for this run
  • run.open_dashboard(): Open dashboard in browser
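Example (a sketch combining the properties and methods above; the artifact path is a placeholder):
run = kandc.init(project="my-project", name="run-object-demo")
print(f"Run {run.name} ({run.id}) in project {run.project}")
print(f"Local run directory: {run.dir}")

run.log({"loss": 0.3}, x=1)
run.log_artifact("outputs/model.pth", name="model-checkpoint")  # placeholder path
print(run.get_dashboard_url())
run.finish()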

Run Modes

Online Mode (Default)

kandc.init(project="my-project")  # mode="online" is default
  • Full cloud functionality
  • Real-time dashboard
  • Authentication required
  • Internet connection required

Offline Mode

kandc.init(project="my-project", mode="offline")
  • Local-only operation
  • No authentication required
  • No internet required
  • Data saved to local directory

Disabled Mode

kandc.init(project="my-project", mode="disabled")
  • No-op mode (zero overhead)
  • All kandc calls do nothing
  • Perfect for production deployment
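For example, the mode can be selected at startup so that production runs become no-ops (the DEPLOY_ENV variable below is illustrative, not part of kandc; KANDC_MODE can be used instead):
import os

# Disable all kandc calls when running in production
mode = "disabled" if os.getenv("DEPLOY_ENV") == "production" else "online"
kandc.init(project="my-project", mode=mode)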

Error Handling

kandc.APIError

Base exception for API-related errors.

kandc.AuthenticationError

Exception raised for authentication failures.
Example:
try:
    kandc.init(project="my-project")
except kandc.AuthenticationError:
    print("Authentication failed")
except kandc.APIError as e:
    print(f"API error: {e}")

Code Snapshot Configuration

kandc automatically captures your source code for experiment reproducibility. You can control this behavior in several ways:

Disable Code Capture Completely

# Method 1: In kandc.init()
kandc.init(
    project="my-project",
    capture_code=False  # Disables all code capture
)

# Method 2: Environment variable
export KANDC_CAPTURE_CODE=false

Custom Exclude Patterns

# Exclude specific files and directories
kandc.init(
    project="my-project",
    code_exclude_patterns=[
        # Model files
        "*.pth", "*.safetensors", "*.bin", "*.ckpt",
        
        # Data directories
        "data/", "datasets/", "cache/",
        
        # Experiment outputs
        "outputs/", "results/", "logs/",
        
        # Temporary files
        "*.tmp", "*.temp", "temp_*",
        
        # Large files
        "*.csv", "*.parquet", "*.h5",
        
        # IDE files
        ".vscode/", ".idea/", "*.swp"
    ]
)

Respect .gitignore

kandc automatically respects your .gitignore file. Add patterns there to exclude them from code capture:
# .gitignore
*.pth
*.safetensors
data/
experiments/
*.log

What Gets Captured

Source Files:
  • Python: .py
  • JavaScript/TypeScript: .js, .ts, .jsx, .tsx
  • Other languages: .java, .cpp, .c, .go, .rs, .rb
  • Scripts: .sh, .bash, .zsh, .ps1, .bat
  • Config: .yaml, .yml, .json, .toml, .ini
  • Documentation: .md, .rst, .txt
Project Files:
  • requirements.txt, pyproject.toml
  • package.json, Dockerfile
  • .gitignore, .env.example

Environment Variables for Code Capture

# Disable code capture globally
export KANDC_CAPTURE_CODE=false

# Set default exclude patterns
export KANDC_CODE_EXCLUDE="*.pth,data/,experiments/"

# Set maximum file size (in bytes)
export KANDC_MAX_FILE_SIZE=1048576  # 1MB

Environment Variables

Configure kandc behavior via environment variables.
Core Configuration:
  • KANDC_BACKEND_URL: Backend server URL
  • KANDC_PROJECT: Default project name
  • KANDC_MODE: Default run mode ("online", "offline", "disabled")
  • KANDC_API_KEY: API key for authentication
Profiling Configuration:
  • KANDC_PROFILER_DISABLED: Disable PyTorch profiler ("1" to disable)
Code Capture Configuration:
  • KANDC_CAPTURE_CODE: Enable/disable code capture ("true", "false")
  • KANDC_CODE_EXCLUDE: Comma-separated exclude patterns
  • KANDC_MAX_FILE_SIZE: Maximum file size in bytes for code capture
Example:
# Core configuration
export KANDC_MODE="offline"
export KANDC_PROJECT="default-project"

# Disable PyTorch profiler
export KANDC_PROFILER_DISABLED=1

# Disable code capture
export KANDC_CAPTURE_CODE="false"

# Or configure code exclusions
export KANDC_CODE_EXCLUDE="*.pth,data/,experiments/,*.log"
export KANDC_MAX_FILE_SIZE="2097152"  # 2MB

python my_experiment.py