Artificial camera navigation is the most ubiquitous and fundamental interaction task in a virtual environment. Efficient and intuitive navigation of a virtual scene affects the completion performance of all other tasks. Although many researchers have elaborated valuable design guidelines for navigation techniques, navigation remains a challenging and demanding process, especially for novice users. Imprecise input hardware, the cognitive burden that interface operation places on users, and the lack of a direct mapping between the user's physical movement and virtual camera motion all produce a discrepancy between the camera position and orientation the user desires and those actually attained. This paper presents a new potential-field-based method for supporting camera navigation. The specially designed potential fields support not only collision avoidance but also goal-profiled attraction and camera manoeuvring. The method works in both static and dynamic environments, can easily be accelerated on the GPU, and can readily be adapted for novice or advanced interface users across miscellaneous navigation tasks.
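As a rough illustration of the general idea behind potential-field navigation (not the paper's specific method), the sketch below moves a camera downhill on a classic Khatib-style potential: a quadratic attractive well centred on the goal plus a repulsive term that activates inside each obstacle's influence radius. All names, gains (`k_att`, `k_rep`), and the influence radius `rho0` are illustrative assumptions.

```python
import math

def potential_gradient(pos, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=2.0):
    """Gradient of a combined attract/repel potential (illustrative sketch)."""
    # Attractive term: quadratic well centred on the goal, grad = k_att * (pos - goal)
    grad = [k_att * (p - g) for p, g in zip(pos, goal)]
    for obs in obstacles:
        diff = [p - o for p, o in zip(pos, obs)]
        rho = math.sqrt(sum(d * d for d in diff))
        if 0.0 < rho < rho0:
            # Repulsive term, active only inside the obstacle's influence radius rho0
            coeff = k_rep * (1.0 / rho0 - 1.0 / rho) / rho ** 3
            grad = [g + coeff * d for g, d in zip(grad, diff)]
    return grad

def navigate(start, goal, obstacles, step=0.05, iters=5000, tol=0.1):
    """Step the camera downhill on the potential until it reaches the goal."""
    pos = list(start)
    for _ in range(iters):
        if math.dist(pos, goal) < tol:
            break
        g = potential_gradient(pos, goal, obstacles)
        norm = math.sqrt(sum(c * c for c in g)) + 1e-9
        pos = [p - step * c / norm for p, c in zip(pos, g)]  # unit-length descent step
    return pos
```

In this toy setup the camera is steered toward the goal while the repulsive term bends its path around nearby obstacles; the paper's goal-profiled attraction and manoeuvring fields would replace these generic terms.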