Differential Privacy in databases is implemented primarily through noise addition. When a query is made on a database, rather than returning the exact result, the database mechanism injects calibrated random noise into the response. This noise is typically drawn from a Laplace or Gaussian distribution centered at zero, with a scale determined by the privacy parameter \(\epsilon\) and the sensitivity of the query. For the Laplace mechanism, the scale is \(\Delta f / \epsilon\), where the sensitivity \(\Delta f\) is the maximum change in the query's output caused by adding or removing a single individual's record. As a result, the aggregate data can still be analyzed, but an adversary cannot confidently infer anything about a specific individual's data.
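To make this concrete, here is a minimal sketch of the Laplace mechanism in Python. The function name `laplace_mechanism` and the example parameters are illustrative, not taken from any particular library:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private answer by adding Laplace noise.

    The noise scale b = sensitivity / epsilon calibrates the mechanism:
    a smaller epsilon (stronger privacy) produces larger noise.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1, since adding or removing
# one person changes the count by at most 1.
true_count = 1234
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Each repeated query draws fresh noise and consumes additional privacy budget, which is why \(\epsilon\) must be tracked across all releases.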
For machine learning, Differential Privacy is applied at two primary stages: during data collection and during training. In data collection, noise is added locally so that individual data contributions remain private before they ever reach the server. During training, per-example gradients are clipped and noise is added to them in each iteration (as in DP-SGD), which limits how much the model can memorize any individual data point. The privacy budget \(\epsilon\) is consumed over the course of training, and accounting techniques such as the moments accountant can be used to track and bound the cumulative privacy loss.
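The following is a simplified sketch of the core DP-SGD update, assuming per-example gradients are already available as NumPy arrays; the function name `dp_sgd_step` and its parameters are illustrative:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr):
    """One DP-SGD update: clip each example's gradient to a fixed norm,
    sum, add Gaussian noise calibrated to that norm, average, and step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm, so each
        # example's influence on the sum is bounded.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    grad_sum = np.sum(clipped, axis=0)
    # Gaussian noise with std = noise_multiplier * clip_norm masks any
    # single example's bounded contribution to the summed gradient.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=grad_sum.shape)
    private_grad = (grad_sum + noise) / len(per_example_grads)
    return params - lr * private_grad
```

Clipping is what gives each step a known sensitivity; without it, a single outlier example could dominate the gradient and no finite noise scale would hide its presence.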