An important, but relatively neglected, aspect of machine models of consciousness is the requirement for a scientific phenomenology, or systematic means of characterizing the experiential states being modeled. In those few cases where need of such a phenomenology is acknowledged, the default approach is usually to use language-based specifications, such as "the visual experience of a red bicycle leaning against a white wall".
Such specifications are problematic for several reasons:
An obvious way to deal with problems 1) and 2) for the case of visual experiences is to use visual images as specifications. However, it would be a mistake to think that even the non-conceptual experience a given robot is modeling is best specified by displaying the raw output of its video camera. For example, the current "output" of a human retina contains gaps or blind spots that are not part of experience. Furthermore, at any given time our visual experience, unlike our retinal output, is stable, encompasses more than the current region of foveation, and is coloured out to the periphery.
I propose a means of specification, a synthetic phenomenology, that does justice to these aspects of visual experience by exploiting the interdependencies between perception and action, both for the robot and for the theorist to whom the specification is presented. In this way, some progress is also made toward overcoming problems 3) and 4).