C-ML Documentation Index
Welcome to the C-ML documentation. This index provides an overview of all available documentation.
Getting Started
- README - Project overview and quick start
- Training Guide - Start here to learn how to train neural networks with automatic metrics tracking
- Visualization UI - Interactive training dashboard and model visualization
Core Documentation
Automatic Differentiation
- Autograd System - Complete guide to automatic differentiation
  - Overview of the autograd system
  - API reference for gradient computation
  - Usage examples and best practices
  - Technical implementation details
- Autograd Implementation - Technical deep dive
  - Detailed implementation notes
  - Design decisions and architecture
  - Performance considerations
  - Advanced features
Neural Network Layers
- Neural Network Layers - Complete layer reference
  - Available layers (Linear, Conv2d, BatchNorm2d, LayerNorm, Pooling, etc.)
  - Layer usage examples
  - API reference
  - Implementation status
- Layers Implementation - Implementation details
  - Layer implementation status
  - Testing recommendations
  - Feature completeness
Training
- Training Guide - Comprehensive training guide
  - Model definition
  - Parameter collection
  - Optimizer usage
  - Loss functions
  - Complete training loop examples
  - Training Metrics - Automatic tracking of training, validation, and test metrics
  - Visualization - Export metrics to JSON for interactive visualization
  - Best practices
Training Metrics
The C-ML library includes built-in training metrics tracking that works automatically:
- Automatic Epoch Timing - Tracks time per epoch and total training time (no manual code needed)
- Loss and Accuracy Tracking - Automatically records training, validation, and test metrics
- Gradient Norm Monitoring - Tracks gradient health during training
- Learning Rate Tracking - Monitors learning rate changes and scheduler information
- Loss Reduction Rate - Computes percentage reduction in loss
- Loss Stability - Calculates standard deviation of recent losses
- Early Stopping Support - Tracks early stopping status (actual vs expected epochs)
- LR Scheduler Visualization - Displays scheduler type and parameters in UI
- Real-time JSON Export - Continuously exports metrics to `training.json` for visualization (when `VIZ=1` or `CML_VIZ=1`)
All metrics are captured automatically when using `cml_init()` and `cml_cleanup()`. See Training Guide for detailed usage.
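As a rough illustration, the overall shape of a program that relies on this automatic tracking is sketched below. This is a minimal sketch assuming only the `cml_init()`/`cml_cleanup()` calls and the `VIZ=1`/`CML_VIZ=1` export behavior named above; the training loop body is a placeholder, not the actual C-ML API.

```c
/* Minimal sketch: only cml_init(), cml_cleanup(), VIZ=1/CML_VIZ=1, and
 * training.json come from this index; the loop body is a placeholder. */
#include "cml.h"

int main(void)
{
    cml_init();   /* enables automatic metrics tracking for the whole run */

    for (int epoch = 0; epoch < 100; ++epoch) {
        /* ... forward pass, loss, backward pass, optimizer step ...
         * Epoch timing, loss/accuracy, gradient norms, and learning rate
         * are recorded automatically; no manual bookkeeping is needed. */
    }

    cml_cleanup(); /* finalizes metrics; with VIZ=1 or CML_VIZ=1 set, they
                    * are exported to training.json for the visualization UI */
    return 0;
}
```

With `VIZ=1` set in the environment, the same metrics stream to `training.json` while training runs, so the dashboard can update in real time.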
Visualization UI
C-ML includes an interactive web-based visualization UI:
- Training Results Dashboard - Real-time visualization of training metrics
- Computational Graph Visualization - Visual representation of ops topology
- Model Architecture View - Interactive model structure visualization using Cytoscape
- Bias-Variance Analysis - Plot training, validation, and test metrics together
- Early Stopping Visualization - Visual indicators for early stopping with actual vs expected epochs
- LR Scheduler Display - Show scheduler type and parameters in metrics panel
- Automatic Launch - Set `VIZ=1` to automatically launch the UI before the program runs
See README for setup and usage.
Integration
- Integration Summary - Library integration overview
  - Build system integration
  - Dependency management
  - Component integration
  - Training metrics integration
Development
- Implementation Status - Current implementation status
  - Completed features
  - Recently added features
  - Implementation notes
- TODO Implementations - Planned features and improvements
  - High priority items
  - Medium priority items
  - Low priority items
  - Performance optimizations
Documentation Structure
docs/
├── INDEX.md # This file
├── AUTOGRAD.md # Autograd system guide
├── AUTOGRAD_IMPLEMENTATION.md # Autograd technical details
├── NN_LAYERS.md # Neural network layers reference
├── LAYERS_COMPLETE.md # Layer implementation details
├── TRAINING.md # Training guide (with metrics)
├── INTEGRATION_SUMMARY.md # Integration overview
├── IMPLEMENTATION_STATUS.md # Implementation status
└── TODO_IMPLEMENTATIONS.md # Future work
Quick Links
For Beginners
- Start with README for an overview
- Read Training Guide for a complete example with metrics
- Explore Neural Network Layers for available layers
- Try the Visualization UI to see your training progress
For Advanced Users
- Review Autograd Implementation for technical details
- Check Layers Implementation for implementation status
- See Implementation Status for current features
- Use Training Metrics for comprehensive monitoring
For Contributors
- Review Integration Summary for build system details
- Check TODO Implementations for planned work
- See Implementation Status for current state
- Follow contribution guidelines in README
Examples
Example code is available in the examples/ directory and root:
- `main.c` - Simple XOR classification example
- `examples/test.c` - Comprehensive training with train/val/test splits and automatic metrics
- `examples/early_stopping_lr_scheduler.c` - Early stopping and learning rate scheduling example
- `examples/autograd_example.c` - Autograd system examples
- `examples/training_loop_example.c` - Training loop pattern demonstration
- `examples/export_graph.c` - Graph export for visualization
API Reference
For detailed API documentation, see:
- `include/cml.h` - Main library header with inline documentation
- `include/tensor/` - Tensor operations
- `include/autograd/` - Automatic differentiation
- `include/nn/` - Neural network components
- `include/optim/` - Optimizers
- `include/Core/training_metrics.h` - Training metrics API
All header files include comprehensive inline documentation using Doxygen-style comments.
Key Features
Core Library
- Automatic differentiation (autograd)
- Neural network layers
- Optimizers (SGD, Adam)
- Loss functions
- Tensor operations
Training Utilities
- Training Metrics - Built-in automatic metrics tracking (no manual code needed)
- Automatic Timing - Epoch time calculation
- Gradient Monitoring - Gradient norm tracking
- Early Stopping - Track early stopping status (actual vs expected epochs)
- LR Scheduling - Display scheduler type and parameters in UI
- Real-time JSON Export - Continuously export metrics for visualization (when `VIZ=1` or `CML_VIZ=1`)
- Centralized Cleanup - `CleanupContext` for resource management
- Dataset Splitting - `dataset_split_three()` for train/val/test splits
- Automatic Evaluation - `training_metrics_evaluate_dataset()` for validation/test evaluation (see the sketch below)
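A hypothetical sketch of how these utilities might fit together follows. Only the names `dataset_split_three()`, `training_metrics_evaluate_dataset()`, and the general train/val/test workflow come from this index; the parameter lists, the `Dataset` and `Model` types, and the split ratios are assumptions for illustration, not the documented C-ML signatures.

```c
/* Hypothetical flow only: the function names appear in this index, but every
 * signature, type, and parameter below is an assumption for illustration. */
#include "cml.h"

static void train_with_splits(Model *model, Dataset *data)
{
    Dataset *train = NULL, *val = NULL, *test = NULL;

    /* Assumed 80/10/10 split; the real argument order may differ. */
    dataset_split_three(data, 0.8f, 0.1f, 0.1f, &train, &val, &test);

    for (int epoch = 0; epoch < 100; ++epoch) {
        /* ... train on `train`; metrics are tracked automatically ... */

        /* Assumed per-epoch validation pass feeding the metrics tracker. */
        training_metrics_evaluate_dataset(model, val);
    }

    /* Final evaluation on the held-out test split. */
    training_metrics_evaluate_dataset(model, test);
}
```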
Visualization
- Interactive Dashboard - Real-time training visualization
- Graph Visualization - Computational graph and model architecture
- Bias-Variance Analysis - Training/validation/test metrics comparison
Support
For issues, questions, or contributions, please refer to the project repository.