This is the second in a series of posts, “Gradients for Grown-Ups”. You can see the previous post here. Last time, we got started with the example \(f(x) = c^T x\) to showcase the general procedure for computing gradients: Express \(f(x)\) in “coordinate form” (e.g. with explicit summations). Compute \(\frac{\partial f}{\partial x_k}\) ov... Read more 10 May 2018 - 5 minute read
This is the first in a series of posts, “Gradients for Grown-Ups”. You can see the next post here. Boyd and Vandenberghe’s Convex Optimization is an excellent primer for learning the fundamentals of the subject. I should know, because I just took a course that featured this book. I didn’t always pay attention in class (because it was my sixth... Read more 09 May 2018 - 5 minute read
I am always delighted when abstraction makes a problem noticeably easier to solve. Today I encountered (Jacod 1999) the following homework question: Let \(Y \sim N(0,1)\) be standard-normally distributed, and let \(a > 0\). Let \[\newcommand{\abs}[1]{\vert#1\vert} Z = \begin{cases} Y & |Y| \leq a, \\ -Y & |Y| > a. \end{cases}... Read more 04 Apr 2018 - 3 minute read
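Since the standard normal is symmetric, flipping the sign of \(Y\) on the event \(\{|Y| > a\}\) leaves its distribution unchanged, so \(Z\) is again standard normal. A quick simulation sketch of the \(Z\) defined above (the variable names and the choice \(a = 1\) are mine, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0  # any fixed threshold a > 0 works

y = rng.standard_normal(1_000_000)
# Z keeps Y where |Y| <= a and flips its sign where |Y| > a
z = np.where(np.abs(y) <= a, y, -y)

# By symmetry of the standard normal, Z should again be N(0, 1):
# its empirical mean and standard deviation sit near 0 and 1.
print(z.mean(), z.std())
```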