Automatically planning camera shots in virtual 3D environments requires solving problems similar to those faced by human cinematographers. In the most essential terms, each shot must communicate a specified visual message or goal. Consequently, the camera must be carefully staged to clearly view the relevant subject(s), properly emphasize the important elements in the shot, and compose an engaging image that holds the viewer's attention. The constraint-based approach to camera planning in virtual 3D environments rests on the assumption that camera shots are composed to communicate a specified visual message, expressed as constraints on how subjects appear in the frame. A human user or intelligent software system issues a request to visualize subjects of interest and specifies how each should be viewed; these requirements take the form of constraints on objects in the scene or on the camera itself. A constraint solver then attempts to find values for each camera parameter such that the given constraints are satisfied, yielding a solution shot. This paper presents a work-in-progress snapshot of the virtual camera constraint model that we are currently developing.
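To make the pipeline concrete, the following Python sketch illustrates one possible reading of this approach. It is not the model developed in this paper: the constraints (subject near the frame centre, medium-shot distance), the thresholds, and the generate-and-test solver are all our own illustrative assumptions, standing in for whatever constraint vocabulary and solver a real system would use.

```python
import math
import random

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _unit(a):
    length = math.sqrt(dot(a, a))
    return None if length < 1e-9 else tuple(x / length for x in a)

def project(cam_pos, cam_target, fov_deg, point):
    """Project a world point into normalized image coordinates
    ([-1, 1] spans the frame) for a look-at pinhole camera.
    Returns None when the point is behind the camera or the
    camera orientation is degenerate."""
    fwd = _unit(sub(cam_target, cam_pos))
    if fwd is None:
        return None
    right = _unit(cross(fwd, (0.0, 1.0, 0.0)))
    if right is None:  # camera looking straight up or down
        return None
    up = cross(right, fwd)
    d = sub(point, cam_pos)
    z = dot(d, fwd)
    if z <= 1e-6:
        return None
    t = math.tan(math.radians(fov_deg) / 2.0)
    return (dot(d, right) / (z * t), dot(d, up) / (z * t))

def violation(cam_pos, cam_target, subject, fov_deg=60.0):
    """Sum of constraint violations for one candidate shot: the
    subject should sit near the frame centre and lie at a
    medium-shot distance (both thresholds are illustrative)."""
    uv = project(cam_pos, cam_target, fov_deg, subject)
    if uv is None:
        return float("inf")
    v = max(0.0, abs(uv[0]) - 0.2) + max(0.0, abs(uv[1]) - 0.2)
    dist = math.sqrt(dot(sub(subject, cam_pos), sub(subject, cam_pos)))
    v += max(0.0, 3.0 - dist) + max(0.0, dist - 6.0)
    return v

def solve(subject, samples=4000, seed=1):
    """Generate-and-test solver: sample candidate camera poses and
    keep the one with the smallest total constraint violation."""
    rng = random.Random(seed)
    best_pos, best_tgt, best_v = None, None, float("inf")
    for _ in range(samples):
        pos = tuple(s + rng.uniform(-8.0, 8.0) for s in subject)
        tgt = tuple(s + rng.uniform(-2.0, 2.0) for s in subject)
        v = violation(pos, tgt, subject)
        if v < best_v:
            best_pos, best_tgt, best_v = pos, tgt, v
    return best_pos, best_tgt, best_v
```

Random sampling is used here only because it keeps the sketch short; practical camera-constraint systems typically rely on numerical optimization or dedicated constraint solvers over the camera's position, orientation, and lens parameters rather than blind sampling.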