3. Quickstart
pconfigs is a generic library for configuring complex systems. Here we'll create a hypothetical trainer for a machine learning model.
3.1. Set up AI agent rules.
Install pconfigs rules for your AI coding agent:
Claude:
$ python -m pconfigs.claude install
(see Claude Integration)
Cursor:
$ python -m pconfigs.cursor install
(see Cursor Integration)
Then ask the agent to "Create a new runnable pconfigs example module + config file."
3.2. Configure a class.
Create source/modules/trainer.py within your project.
from __future__ import annotations  # Forward references of typehints are required.

from pconfigs import pconfig, pconfiged, pdefaults


# Trainer class.
@pconfiged(runnable=True)  # runnable=True facilitates a main().
class Trainer:
    config: TrainerConfig  # A class' config is always called 'config'.

    def __init__(self):  # Constructor takes no arguments.
        pass

    def main(self, *args, **kwargs):
        print(self.config.message)  # Config is automatically available.
        print(f"Lr: {self.config.lr}")


# Trainer config class.
@pconfig(constructs=Trainer)  # We can use this config to construct Trainer.
class TrainerConfig:
    message: str  # All config parameters are typehinted.
    lr: float


# Config defaults.
pdefaults += TrainerConfig(  # Set defaults separately from type
    message="Hello, World!",  # ..definitions (avoids clutter).
    lr=1e-4,
)
3.3. Create a config file.
Make source/pconfig/experiment_1.py:
from source.modules.trainer import TrainerConfig

config = TrainerConfig(  # Set just the learning rate. Use defaults
    lr=1e-3,  # ..for the other config parameters.
)
3.4. Run the configured system.
$ python -m pconfigs.run source.pconfig.experiment_1.config
(Here pconfigs.run is the runner, source.pconfig.experiment_1 is the dotpath to the config module, and config is the attribute within it.)
This command constructs Trainer with the config in experiment_1.py and runs Trainer.main(). Specifically, it
1. Reads the TrainerConfig instance in experiment_1.py,
2. Reads the constructable type that has been associated with TrainerConfig (it's Trainer),
3. Checks if Trainer is runnable (it is, because runnable=True),
4. Constructs Trainer with experiment_1.config, and
5. Calls main().
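The dotpath lookup in step 1 can be sketched with the standard library's importlib. This is only an illustration of how a dotted path resolves to a Python object, not pconfigs' actual implementation; resolve_dotpath is a hypothetical helper, demonstrated on a stdlib path rather than source.pconfig.experiment_1.config:

```python
import importlib


def resolve_dotpath(dotpath: str):
    """Resolve 'package.module.attr' to the attribute object.

    Hypothetical helper for illustration; not part of pconfigs.
    """
    module_path, _, attr = dotpath.rpartition(".")
    module = importlib.import_module(module_path)  # import the module part
    return getattr(module, attr)                   # look up the final attribute


# Demonstrate with a standard-library dotpath:
pi = resolve_dotpath("math.pi")
print(pi)  # 3.141592653589793
```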
The output is,
Hello, World!
Lr: 0.001
This works equally well in machine learning applications, for example
$ torchrun --nproc_per_node=1 -m pconfigs.run source.pconfig.experiment_1.config
Runnable configs can encapsulate all system parameters into a single executable command that takes no parameters. This encapsulation facilitates reproducible experiments.
3.5. Create a revised config file.
A second feature of pconfigs is that you can automatically copy parameters between configs.
Make source/pconfig/experiment_2.py:
from source.modules.trainer import TrainerConfig
from source.pconfig.experiment_1 import config as base

config = TrainerConfig(
    base,  # Copy all parameters from the base experiment.
    message="Experiment 2.",  # ..except change the message parameter.
)
Automatic copying makes it easy to derive new experiments from existing ones. (See also Construction.)
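For comparison, the copy-then-override semantics resemble dataclasses.replace from the standard library. The sketch below uses a plain-dataclass stand-in (TrainerConfigLike is hypothetical, for illustration only) and says nothing about how pconfigs itself implements copying:

```python
from dataclasses import dataclass, replace


@dataclass
class TrainerConfigLike:  # Plain-dataclass stand-in for TrainerConfig (illustration only).
    message: str = "Hello, World!"
    lr: float = 1e-4


base = TrainerConfigLike(lr=1e-3)                # like experiment_1: override lr only
config = replace(base, message="Experiment 2.")  # like experiment_2: copy base, change message

print(config)  # TrainerConfigLike(message='Experiment 2.', lr=0.001)
```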
$ python -m pconfigs.run source.pconfig.experiment_2.config
prints
Experiment 2.
Lr: 0.001
3.6. Print and compare configs.
A third feature of pconfigs is printing, which documents all parameter values and the paths of all types that define the system.
$ python -m pconfigs.print source.pconfig.experiment_2.config
(Here pconfigs.print is the printer; it prints repr(config).)
This command prints interpretable Python code, formatted with Black, as shown below. (See also Printing.)
from source.modules.trainer import TrainerConfig, Trainer

# NOTE: This code is not intended to run. It is for reading and looking up type definitions.
# If you want to run or inspect these objects, import the config from where it is defined.
TrainerConfig(
    constructable_type=Trainer,  # ClassVar (omit from config)
    message="Hello, World!",
    lr=0.001,
)
You can
Use the printed import paths and typenames to identify the relevant source code in your project,
Use file diff tools to compare printed configs between experiments.
NOTE: constructable_type=Trainer is not the string 'Trainer'. The config contains the resolved Python type, which unambiguously identifies the module in which the code is defined. You do not need to manually create model registries that map strings like 'Trainer' to the resolved type (a common practice in deep learning codebases); the Python import system does this already.
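This works because every Python class object already records where it is defined. The snippet below illustrates the point with a standard-library class; the hand-built registry dict is hypothetical and exists only to show what holding the class itself lets you avoid:

```python
from fractions import Fraction

# A class object carries its defining module and name, so a printed config
# that holds the class itself pins down the source code unambiguously.
cls = Fraction
print(f"{cls.__module__}.{cls.__qualname__}")  # fractions.Fraction

# A manual string registry duplicates what the import system already provides:
registry = {"Fraction": Fraction}  # hypothetical; must be kept in sync by hand
assert registry["Fraction"] is cls
```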
You can also print within your code. This enables you to log the system configuration when experiments are launched.
>>> from source.pconfig.experiment_2 import config
>>> print(config) # Prints only the config values.
>>> print(repr(config)) # Prints also with import paths, as with pconfigs.print.
3.7. But wait, there's more!
This quickstart covers the essentials: defining, running, copying, and printing configs. For more details,
Constructing configs (see Construction)
Running configs (see Runnables)
Printing configs (see Printing)
And many important pconfig features have not been covered here:
Computed properties with @pproperty (see Properties)
Environment variables with @penv (see Environments)
Testing with pconfigs.test (see Testing)
External library configuration with @pconfiged(mock=True) (see External Libraries)
Interpretable enum representations with @penum (see Enums)