Approximating Value Functions
in Classifier Systems
February 2005
Lashon B. Booker, The MITRE Corporation
ABSTRACT
While some attention has recently been given to the issue of function approximation using learning classifier systems (e.g., [13, 3]), few studies have looked at the quality of the value function approximation computed by a learning classifier system when it solves a reinforcement learning problem [1, 8]. By contrast, considerable attention has been paid to this issue in the reinforcement learning literature [12]. One of the fundamental assumptions underlying algorithms for solving reinforcement learning problems is that states and state-action pairs have well-defined values that can be computed and used to help determine an optimal policy. The quality of those value approximations is a critical factor in determining the success of many algorithms in solving reinforcement learning problems.
