Show HN: Charl – ML language with native tensors and autograd

// Neural network training with automatic differentiation (learning XOR)
let X = tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]], [4, 2])
let Y = tensor([0.0, 1.0, 1.0, 0.0], [4, 1])

// Initialize parameters with gradient tracking:
// a hidden layer (2 -> 2) and an output layer (2 -> 1), so pred matches Y's [4, 1] shape
let W1 = tensor_with_grad([0.5, -0.3, 0.2, 0.4], [2, 2])
let b1 = tensor_with_grad([0.1, -0.1], [2])
let W2 = tensor_with_grad([0.3, -0.2], [2, 1])
let b2 = tensor_with_grad([0.0], [1])

// Create optimizer
let optimizer = adam_create(0.01)

let epoch = 0
while epoch < 100 {
    // Forward pass
    let h1 = nn_sigmoid(nn_linear(X, W1, b1))
    let pred = nn_sigmoid(nn_linear(h1, W2, b2))

    // Compute loss
    let loss = nn_mse_loss(pred, Y)

    // Backward pass - automatic differentiation
    tensor_backward(loss)

    // Update parameters
    let params = [W1, b1, W2, b2]
    let updated = adam_step(optimizer, params)
    W1 = updated[0]
    b1 = updated[1]
    W2 = updated[2]
    b2 = updated[3]

    epoch = epoch + 1
}
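For a sense of what tensor_backward produces here: writing the forward pass as matrix operations, the gradients are the standard backpropagation chain-rule terms. This is textbook math, not Charl's documented internals; sigma is the sigmoid, \odot the elementwise product, and N the number of elements in \hat{Y}.

\[
H = \sigma(X W_1 + b_1), \qquad \hat{Y} = \sigma(H W_2 + b_2), \qquad L = \tfrac{1}{N}\,\lVert \hat{Y} - Y \rVert^2
\]
\[
\delta_2 = \tfrac{2}{N}\,(\hat{Y} - Y) \odot \hat{Y} \odot (1 - \hat{Y}), \qquad \frac{\partial L}{\partial W_2} = H^\top \delta_2, \qquad \frac{\partial L}{\partial b_2} = \mathbf{1}^\top \delta_2
\]
\[
\delta_1 = (\delta_2 W_2^\top) \odot H \odot (1 - H), \qquad \frac{\partial L}{\partial W_1} = X^\top \delta_1, \qquad \frac{\partial L}{\partial b_1} = \mathbf{1}^\top \delta_1
\]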

Charl handles both halves of the training step natively: tensor_backward differentiates the loss with respect to every tensor created via tensor_with_grad, and adam_step consumes those gradients to update the parameters.
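Assuming adam_create(0.01) sets the learning rate \alpha = 0.01 and that Charl follows the standard Adam formulation (the \beta_1, \beta_2, and \epsilon values below are the usual defaults, assumed rather than documented), each adam_step would compute, for a parameter \theta with gradient g_t:

\[
m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2
\]
\[
\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad \theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
\]

with \beta_1 = 0.9, \beta_2 = 0.999, and \epsilon = 10^{-8} as the conventional defaults.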
