
Risk Directed Importance Sampling in Stochastic Dual Dynamic Programming with Hidden Markov Models for Grid Level Energy Storage

Published 16 Jan 2020 in math.OC (arXiv:2001.06026v2)

Abstract: Power systems that need to integrate renewables at a large scale must account for the high levels of uncertainty introduced by these power sources. This can be accomplished with a system of many distributed grid-level storage devices. However, developing a cost-effective and robust control policy in this setting is a challenge due to the high dimensionality of the resource state and the highly volatile stochastic processes involved. We first model the problem using a carefully calibrated power grid model and a specialized hidden Markov stochastic model for wind power that replicates crossing times. We then base our control policy on a variant of stochastic dual dynamic programming, an algorithm well suited for certain high-dimensional control problems, modified to accommodate hidden Markov uncertainty in the underlying stochastic processes. However, the algorithm can be impractical to use because it exhibits relatively slow convergence. To accelerate it, we apply both quadratic regularization and a risk-directed importance sampling technique for sampling the outcome space at each time step in the backward pass of the algorithm. We show that the resulting policies are more robust than those developed using classical SDDP modeling assumptions and algorithms.
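To make the sampling idea in the abstract concrete, the following is a minimal, generic sketch of risk-directed importance sampling over a discrete outcome space: outcomes are drawn from a distribution exponentially tilted toward high-cost (risky) realizations, and importance weights restore unbiasedness under the nominal distribution. The function name, the exponential-tilting form, and the parameter `beta` are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def risk_directed_sample(costs, probs, beta=1.0, n_samples=5, rng=None):
    """Sample outcome indices with probability tilted toward high-cost
    outcomes, returning importance weights that keep estimates unbiased
    under the nominal distribution `probs`.

    NOTE: a generic exponential-tilting sketch for illustration only;
    `beta` controls how strongly sampling is directed toward risk.
    """
    rng = np.random.default_rng(rng)
    costs = np.asarray(costs, dtype=float)
    probs = np.asarray(probs, dtype=float)
    # Tilt the nominal distribution toward high-cost outcomes
    # (subtract the max cost for numerical stability).
    tilted = probs * np.exp(beta * (costs - costs.max()))
    tilted /= tilted.sum()
    idx = rng.choice(len(costs), size=n_samples, p=tilted)
    # Likelihood ratios correct for the change of measure, so that
    # weighted averages estimate expectations under `probs`.
    weights = probs[idx] / tilted[idx]
    return idx, weights
```

In an SDDP-style backward pass, such a sampler would replace uniform sampling of outcomes when constructing cuts, concentrating computation on the scenarios that dominate a risk-averse objective.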
