namespace Urho3D
{
/**
\page Containers Container types
Urho3D implements its own string type and template containers instead of using STL. The rationale for this consists of the following:
- Increased performance in some cases, for example when using the PODVector class.
- Guaranteed binary size of strings and containers, to allow eg. embedding inside the Variant object.
- Reduced compile time.
- Straightforward naming and implementation that aids in debugging and profiling.
- Convenient member functions can be added, for example String::Split() or Vector::Compact().
- Consistency with the rest of the classes, see \ref CodingConventions "Coding conventions".
The classes in question are String, Vector, PODVector, List, HashSet and HashMap. PODVector is only to be used when the elements of the vector need no construction or destruction and can be moved with a block memory copy.
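For illustration, a minimal sketch of typical container usage:
\code
String csv("one,two,three");
Vector<String> tokens = csv.Split(','); // "one", "two", "three"

PODVector<int> numbers; // Plain old data; elements are moved with block memory copies
numbers.Push(1);
numbers.Push(2);

HashMap<String, int> scores;
scores["Player1"] = 42;
\endcode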
The list, set and map classes use a fixed-size allocator internally. This can also be used by the application, either by using the procedural functions AllocatorInitialize(), AllocatorUninitialize(), AllocatorReserve() and AllocatorFree(), or through the template class Allocator.
In script, the String class is exposed as it is. The template containers can not be directly exposed to script, but instead a template Array type exists, which behaves like a Vector, but does not expose iterators. In addition the VariantMap is available, which is a HashMap<StringHash, Variant>.
\section Containers_cxx11 C++11 features
Aggregate initializers:
\code
VariantMap parameters = { {"Key1", "Value1"}, {"Key2", "Value2"} };
\endcode
Range-based for loop:
\code
for (auto&& item: container)
{
}
\endcode
\page ObjectTypes Object types and factories
Classes that derive from Object contain type-identification, they can be created through object factories, and they can send and receive \ref Events "events". Examples of these are all Component, Resource and UIElement subclasses. To be able to be constructed by a factory, they need to have a constructor that takes a Context pointer as the only parameter.
%Object factory registration and object creation through factories are directly accessible only in C++, not in script.
The definition of an Object subclass must contain the URHO3D_OBJECT(className, baseTypeName) macro. Type identification is available both as text (GetTypeName() or GetTypeNameStatic()) and as a 32-bit hash of the type name (GetType() or GetTypeStatic()).
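For example, a minimal Object subclass could be declared like this:
\code
class MyClass : public Object
{
    URHO3D_OBJECT(MyClass, Object);

public:
    explicit MyClass(Context* context) :
        Object(context)
    {
    }
};
\endcode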
To register an object factory for a specific type, call the \ref Context::RegisterFactory "RegisterFactory()" template function on Context. You can get its pointer from any Object either via the \ref Object::context_ "context_" member variable, or by calling \ref Object::GetContext "GetContext()". An example:
\code
context_->RegisterFactory<MyClass>();
\endcode
To create an object using a factory, call Context's \ref Context::CreateObject "CreateObject()" function. This takes the 32-bit hash of the type name as a parameter. The created object (or null if there was no matching factory registered) will be returned inside a SharedPtr<Object>. For example:
\code
SharedPtr<Object> newComponent = context_->CreateObject(type);
\endcode
\page Subsystems Subsystems
Any Object can be registered to the Context as a subsystem, by using the function \ref Context::RegisterSubsystem "RegisterSubsystem()". They can then be accessed by any other Object inside the same context by calling \ref Object::GetSubsystem "GetSubsystem()". Only one instance of each object type can exist as a subsystem.
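For example, registering and later accessing a custom subsystem (here the hypothetical MyClass from above) could look like this:
\code
context_->RegisterSubsystem(new MyClass(context_));
MyClass* myClass = GetSubsystem<MyClass>();
\endcode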
After Engine initialization, the following subsystems will always exist:
- Time: manages frame updates, frame number and elapsed time counting, and controls the frequency of the operating system low-resolution timer.
- WorkQueue: executes background tasks in worker threads.
- FileSystem: provides directory operations.
- Log: provides logging services.
- ResourceCache: loads resources and keeps them cached for later access.
- Network: provides UDP networking and scene replication.
- Input: handles keyboard and mouse input. Will be inactive in headless mode.
- UI: the graphical user interface. Will be inactive in headless mode.
- Audio: provides sound output. Will be inactive if sound disabled.
- Engine: creates the other subsystems and controls the main loop iteration and framerate limiting.
The following subsystems are optional, so GetSubsystem() may return null if they have not been created:
- Profiler: Provides hierarchical function execution time measurement using the operating system performance counter. Exists if profiling has been compiled in (configurable from the root CMakeLists.txt)
- EventProfiler: Same as Profiler but for events.
- Graphics: Manages the application window, the rendering context and resources. Exists if not in headless mode.
- Renderer: Renders scenes in 3D and manages rendering quality settings. Exists if not in headless mode.
- Script: Provides the AngelScript execution environment. Needs to be created and registered manually.
- Console: provides an interactive AngelScript console and log display. Created by calling \ref Engine::CreateConsole "CreateConsole()".
- DebugHud: displays rendering mode information and statistics and profiling data. Created by calling \ref Engine::CreateDebugHud "CreateDebugHud()".
- Database: Manages database connections. The build option for the database support needs to be enabled when building the library.
In script, the subsystems are available through the following global properties:
time, fileSystem, log, cache, network, input, ui, audio, engine, graphics, renderer, script, console, debugHud, database. Note that WorkQueue and Profiler are not available to script due to their low-level nature.
\page Events Events
The Urho3D event system allows for data transport and function invocation without the sender and receiver having to explicitly know of each other. Both the event sender and receiver must derive from Object. An event receiver must subscribe to each event type it wishes to receive: one can either subscribe to the event coming from any sender, or from a specific sender. The latter is useful for example when handling events from the user interface elements.
Events themselves do not need to be registered. They are identified by 32-bit hashes of their names. Event parameters (the data payload) are optional and are contained inside a VariantMap, identified by 32-bit parameter name hashes. For the inbuilt Urho3D events, event type (E_UPDATE, E_KEYDOWN, E_MOUSEMOVE etc.) and parameter hashes (P_TIMESTEP, P_DX, P_DY etc.) are defined as namespaced constants inside include files such as CoreEvents.h or InputEvents.h, using the helper macros URHO3D_EVENT & URHO3D_PARAM.
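For illustration, a game-specific event with one parameter could be defined following the same pattern as the inbuilt event headers (E_PLAYERDIED and P_SCORE are hypothetical names):
\code
URHO3D_EVENT(E_PLAYERDIED, PlayerDied)
{
    URHO3D_PARAM(P_SCORE, Score); // int
}
\endcode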
When subscribing to an event, a handler function must be specified. In C++ these must have the signature void HandleEvent(StringHash eventType, VariantMap& eventData). The URHO3D_HANDLER(className, function) macro helps in defining the required class-specific function pointers. For example:
\code
SubscribeToEvent(E_UPDATE, URHO3D_HANDLER(MyClass, MyEventHandler));
\endcode
In script, events are identified by their string names instead of name hashes (though these are internally converted to hashes.) %Script event handlers can either have the same signature as in C++, or a simplified signature void HandleEvent() when the event type and parameters are not required. The same event subscription would look like:
\code
SubscribeToEvent("Update", "MyEventHandler");
\endcode
In C++ events must always be handled by a member function. In script procedural event handling is also possible; in this case the ScriptFile where the event handler function is located becomes the event receiver. See \ref Scripting "Scripting" for more details.
Events can also be unsubscribed from. See \ref Object::UnsubscribeFromEvent "UnsubscribeFromEvent()" for details.
To send an event, fill the event parameters (if necessary) and call \ref Object::SendEvent "SendEvent()". For example, this (in C++) is how the Engine subsystem sends the Update event on each frame. For performance reasons, in C++ the same map object is reused each frame by calling \ref Context::GetEventDataMap "GetEventDataMap()" instead of creating a new VariantMap each time. Note the parameter name hashes being inside a namespace which matches the event name:
\code
using namespace Update;
VariantMap& eventData = GetEventDataMap();
eventData[P_TIMESTEP] = timeStep_;
SendEvent(E_UPDATE, eventData);
\endcode
In script, event parameters, like event types, are referred to with strings, so the same code would look like:
\code
VariantMap eventData;
eventData["TimeStep"] = timeStep;
SendEvent("Update", eventData);
\endcode
\section Events_AnotherObject Sending events through another object
Because the \ref Object::SendEvent "SendEvent()" function is public, an event can be "masqueraded" as originating from any object, even when not actually sent by that object's member function code. This can be used to simplify communication, particularly between components in the scene. For example, the \ref Physics "physics simulation" signals collision events by using the participating \ref Node "scene nodes" as senders. This means that any component can easily subscribe to its own node's collisions without having to know of the actual physics components involved. The same principle can also be used in any game-specific messaging, for example making a "damage received" event originate from the scene node, though it itself has no concept of damage or health.
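As a sketch, sending the hypothetical event defined above through a scene node from inside a Component subclass, so that other components on the same node can subscribe to it:
\code
using namespace PlayerDied;

VariantMap& eventData = GetEventDataMap();
eventData[P_SCORE] = score_; // score_: a hypothetical member variable
GetNode()->SendEvent(E_PLAYERDIED, eventData);
\endcode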
\section Events_cxx11 C++11 event binding and sending
Events can be bound to lambda functions including capturing context:
\code
SubscribeToEvent(E_UPDATE, [&](StringHash type, VariantMap& args) {
});
\endcode
Binding class methods with std::bind():
\code
void MyObject::OnUpdate(StringHash type, VariantMap& args)
{
}
SubscribeToEvent(E_UPDATE, std::bind(&MyObject::OnUpdate, this, std::placeholders::_1, std::placeholders::_2));
\endcode
std::bind() discarding unneeded parameters:
\code
void Class::OnUpdate(VariantMap& args)
{
}
using namespace std::placeholders;
SubscribeToEvent(E_UPDATE, std::bind(&Class::OnUpdate, this, _2));
\endcode
There is a convenient method for sending events using a C++ variadic template, which reduces the amount of boilerplate code required. Using the same example as above, the code can be rewritten in C++11 as:
\code
using namespace Update;
SendEvent(E_UPDATE, P_TIMESTEP, timeStep_);
\endcode
There is only one parameter pair in the above example; however, this overload accepts any number of parameter pairs.
\page MainLoop Engine initialization and main loop
Before a Urho3D application can enter its main loop, the Engine subsystem object must be created and initialized by calling its \ref Engine::Initialize "Initialize()" function. Parameters sent in a VariantMap can be used to direct how the Engine initializes itself and the subsystems. One way to configure the parameters is to parse them from the command line like the Urho3DPlayer application does: this is accomplished by the helper function \ref Engine::ParseParameters "ParseParameters()".
The full list of supported parameters, their datatypes and default values (these are also defined as constants in Engine/EngineDefs.h; a usage sketch follows the list):
- Headless (bool) Headless mode enable. Default false.
- LogLevel (int) %Log verbosity level. Default LOG_INFO in release builds and LOG_DEBUG in debug builds.
- LogQuiet (bool) %Log quiet mode, ie. to not write warning/info/debug log entries into standard output. Default false.
- LogName (string) %Log filename. Default "Urho3D.log".
- FrameLimiter (bool) Whether to cap maximum framerate to 200 (desktop) or 60 (Android/iOS/tvOS). Default true.
- WorkerThreads (bool) Whether to create worker threads for the %WorkQueue subsystem according to available CPU cores. Default true.
- %EventProfiler (bool) Whether to create the EventProfiler subsystem. Default true.
- ResourcePrefixPaths (string) A semicolon-separated list of resource prefix paths to use. If not specified, the default prefix path is set to the executable path. The resource prefix paths can also be defined using the URHO3D_PREFIX_PATH environment variable. When both are defined, the paths set by the -pp command line option take precedence.
- ResourcePaths (string) A semicolon-separated list of resource paths to use. If corresponding packages (ie. Data.pak for Data directory) exist they will be used instead. Default "Data;CoreData".
- ResourcePackages (string) A semicolon-separated list of resource packages to use. Default empty.
- AutoloadPaths (string) A semicolon-separated list of autoload paths to use. Any resource packages and subdirectories inside an autoload path will be added to the resource system. Default "Autoload".
- ExternalWindow (void ptr) External window handle to use instead of creating an application window. Default null.
- WindowIcon (string) %Window icon image resource name. Default empty (use application default icon.)
- WindowTitle (string) %Window title. Default "Urho3D".
- WindowWidth (int) %Window horizontal dimension. Default 0 (use desktop resolution, or 1024 in windowed mode.)
- WindowHeight (int) %Window vertical dimension. Default 0 (use desktop resolution, or 768 in windowed mode.)
- WindowPositionX (int) %Window horizontal position. Default center to screen.
- WindowPositionY (int) %Window vertical position. Default center to screen.
- FullScreen (bool) Whether to create a full-screen window. Default true.
- Borderless (bool) Whether to create the window as borderless. Default false.
- WindowResizable (bool) Whether window is resizable. Default false.
- HighDPI (bool) Whether window is high DPI. Default true. Currently only supported by Apple platforms (macOS, iOS, and tvOS).
- TripleBuffer (bool) Whether to use triple-buffering. Default false.
- VSync (bool) Whether to wait for vertical sync when presenting rendering window contents. Default false.
- FlushGPU (bool) Whether to flush GPU command buffer each frame (Direct3D9) or limit the amount of buffered frames (Direct3D11) for less input latency. Ineffective on OpenGL. Default false.
- ForceGL2 (bool) When true, forces OpenGL 2 use even if OpenGL 3 is available. No effect on Direct3D or mobile builds. Default false.
- Multisample (int) Hardware multisampling level. Default 1 (no multisampling.)
- Orientations (string) Space-separated list of allowed orientations. Effective only on iOS. All possible values are "LandscapeLeft", "LandscapeRight", "Portrait" and "PortraitUpsideDown". Default "LandscapeLeft LandscapeRight".
- Monitor (int) Monitor number to use. 0 is the default (primary) monitor.
- RefreshRate (int) Monitor refresh rate in Hz to use.
- DumpShaders (string) Filename to dump used shader variations to for precaching.
- %RenderPath (string) Default renderpath resource name. Default empty, which causes forward rendering (bin/CoreData/RenderPaths/Forward.xml) to be used.
- Shadows (bool) Shadow rendering enable. Default true.
- LowQualityShadows (bool) Low-quality (1 sample) shadow mode. Default false.
- MaterialQuality (int) %Material quality level. Default 2 (high)
- TextureQuality (int) %Texture quality level. Default 2 (high)
- TextureFilterMode (int) %Texture default filter mode. Default 2 (trilinear)
- TextureAnisotropy (int) %Texture anisotropy level. Default 4. This only has an effect on anisotropically filtered textures.
- %Sound (bool) %Sound enable. Default true.
- SoundBuffer (int) %Sound buffer length in milliseconds. Default 100.
- SoundMixRate (int) %Sound output frequency in Hz. Default 44100.
- SoundStereo (bool) Stereo sound output mode. Default true.
- SoundInterpolation (bool) Interpolated sound output mode to improve quality. Default true.
- TouchEmulation (bool) %Touch emulation on desktop platform. Default false.
- ShaderCacheDir (string) Shader binary cache directory for Direct3D. Default "urho3d/shadercache" within the user's application preferences directory.
- PackageCacheDir (string) Package cache directory for Network subsystem. Not specified by default.
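As an illustration, a few of these parameters could be set programmatically before calling \ref Engine::Initialize "Initialize()". A minimal sketch using the constants from Engine/EngineDefs.h (here engine is a pointer to the Engine subsystem):
\code
VariantMap engineParameters;
engineParameters[EP_FULL_SCREEN] = false;
engineParameters[EP_WINDOW_WIDTH] = 1280;
engineParameters[EP_WINDOW_HEIGHT] = 720;
engine->Initialize(engineParameters);
\endcode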
\section MainLoop_Frame Main loop iteration
The main loop iteration (also called a frame) is driven by the Engine, while it is the program's (for example Urho3DPlayer's) responsibility to continuously loop this iteration by calling \ref Engine::RunFrame "RunFrame()". This function in turn calls the Time subsystem's \ref Time::BeginFrame "BeginFrame()" and \ref Time::EndFrame "EndFrame()" functions, and sends various update events in between. The event order is:
- E_BEGINFRAME: signals the beginning of the new frame. Input and Network react to this to check for operating system window messages and arrived network packets.
- E_UPDATE: application-wide logic update event. By default each update-enabled Scene reacts to this and triggers the scene update (more on this below.)
- E_POSTUPDATE: application-wide logic post-update event. The UI subsystem updates its logic here.
- E_RENDERUPDATE: Renderer updates its viewports here to prepare for rendering, and the UI generates render commands necessary to render the user interface.
- E_POSTRENDERUPDATE: by default nothing hooks to this. This can be used to implement logic that requires the rendering views to be up-to-date, for example to do accurate raycasts. Scenes must not be modified at this point; in particular, deleting scene objects may cause crashes.
- E_ENDFRAME: signals the end of the frame. Before this, rendering the frame and measuring the next frame's timestep will have occurred.
The update of each Scene causes further events to be sent:
- E_SCENEUPDATE: variable timestep scene update. This is a good place to implement any scene logic that does not need to happen at a fixed step.
- E_SCENESUBSYSTEMUPDATE: update scene-wide subsystems. Currently only the PhysicsWorld component listens to this, which causes it to step the physics simulation and send the following two events for each simulation step:
- E_PHYSICSPRESTEP: called before the simulation iteration. Happens at a fixed rate (the physics FPS.) If fixed timestep logic updates are needed, this is a good event to listen to.
- E_PHYSICSPOSTSTEP: called after the simulation iteration. Happens at the same rate as E_PHYSICSPRESTEP.
- E_SMOOTHINGUPDATE: update SmoothedTransform components in network client scenes.
- E_SCENEPOSTUPDATE: variable timestep scene post-update. ParticleEmitter and AnimationController update themselves as a response to this event.
Variable timestep logic updates are preferable to fixed timestep, because they are only executed once per frame. In contrast, if the rendering framerate is low, several physics simulation steps will be performed on each frame to keep up the apparent passage of time, and if this also causes a lot of logic code to be executed for each step, the program may bog down further if the CPU can not handle the load. Note that the Engine's \ref Engine::SetMinFps "minimum FPS", by default 10, sets a hard cap for the timestep to prevent spiraling down to a complete halt; if exceeded, animation and physics will instead appear to slow down.
\section MainLoop_ApplicationState Main loop and the application activation state
The application window's state (has input focus, minimized or not) can be queried from the Input subsystem. It can also affect the main loop in the following ways:
- Rendering is always skipped when the window is minimized.
- To avoid spinning the CPU and GPU unnecessarily, it is possible to define a smaller maximum FPS when no input focus. See \ref Engine::SetMaxInactiveFps "SetMaxInactiveFps()"
- It is also possible to automatically pause update events and audio when the window is minimized. Use \ref Engine::SetPauseMinimized "SetPauseMinimized()" to control this behaviour. By default it is not enabled on desktop, and enabled on mobile devices (Android and iOS/tvOS). For singleplayer games this is recommended to avoid unwanted progression while away from the program. However in a multiplayer game this should not be used, as the missing scene updates would likely desync the client with the server.
- On mobile devices the window becoming minimized can mean that it will never become maximized again, in case the OS decides it needs to free memory and kills your program. Therefore you should listen for the E_INPUTFOCUS event from the Input subsystem and immediately save your program state as applicable if the program loses input focus or is minimized.
- On mobile devices it is also unsafe to access or create any graphics resources while the window is minimized (as the graphics context may be destroyed during this time); doing so can crash the program. It is recommended to leave the pause-minimized feature on to ensure you do not have to check for this in your update code.
Note that on iOS/tvOS calling \ref Engine::Exit "Exit()" is a no-op as there is no officially sanctioned way to manually exit your program. On Android it will cause the activity to manually exit.
\section MainLoop_ApplicationFramework Application framework
The Application class provides a minimal framework for a Urho3D C++ application with a main loop. It has virtual functions Setup(), Start() and Stop() which can be defined by the application subclass. The header file also provides a macro for defining a program entry point, which
will instantiate the Context object and then the user-specified application class. A minimal example, which would just display a blank rendering window and exit by pressing ESC:
\code
#include <Urho3D/Engine/Application.h>
#include <Urho3D/Engine/Engine.h>
#include <Urho3D/Input/InputEvents.h>
using namespace Urho3D;
class MyApp : public Application
{
public:
MyApp(Context* context) :
Application(context)
{
}
virtual void Setup()
{
// Called before engine initialization. engineParameters_ member variable can be modified here
}
virtual void Start()
{
// Called after engine initialization. Setup application & subscribe to events here
SubscribeToEvent(E_KEYDOWN, URHO3D_HANDLER(MyApp, HandleKeyDown));
}
virtual void Stop()
{
// Perform optional cleanup after main loop has terminated
}
void HandleKeyDown(StringHash eventType, VariantMap& eventData)
{
using namespace KeyDown;
// Check for pressing ESC. Note the engine_ member variable for convenience access to the Engine object
int key = eventData[P_KEY].GetInt();
if (key == KEY_ESCAPE)
engine_->Exit();
}
};
URHO3D_DEFINE_APPLICATION_MAIN(MyApp)
\endcode
\page SceneModel Scene model
Urho3D's scene model can be described as a component-based scene graph. The Scene consists of a hierarchy of scene nodes, starting from the root node, which also represents the whole scene. Each Node has a 3D transform (position, rotation and scale), a name and an ID + optionally tag(s) and a freeform VariantMap for \ref Node::GetVars "user variables", but no other functionality.
\section SceneModel_Components Components
Rendering 3D objects, sound playback, physics and scripted logic updates are all enabled by creating different \ref Component "Components" into the nodes by calling \ref Node::CreateComponent "CreateComponent()". As with events, in C++ components are identified by type name hashes, and template forms of the component creation and retrieval functions exist for convenience. For example:
\code
Light* light = node->CreateComponent<Light>();
\endcode
In script, strings are used to identify component types instead, so the same code would look like:
\code
Light@ light = node.CreateComponent("Light");
\endcode
Because components are created using \ref ObjectTypes "object factories", a factory must be registered for each component type.
Components created into the Scene itself have a special role: to implement scene-wide functionality. They should be created before all other components, and include the following:
- Octree: implements spatial partitioning and accelerated visibility queries. Without this 3D objects can not be rendered.
- PhysicsWorld: implements physics simulation. Physics components such as RigidBody or CollisionShape can not function properly without this.
- DebugRenderer: implements debug geometry rendering.
"Ordinary" components like Light, Camera or StaticModel should not be created directly into the Scene, but rather into child nodes.
\section SceneModel_Identification Identification and queries
Nodes can be queried by name from the Scene (or any parent node) with the function \ref Node::GetChild "GetChild()". The query can be optionally recursive, meaning it traverses into child hierarchies. This is relatively slow, since string compares are involved.
Unlike nodes, components do not have names; components inside the same node are identified only by their type and their index in the node's component list, which is filled in creation order. See the various overloads of \ref Node::GetComponent "GetComponent()" or \ref Node::GetComponents "GetComponents()" for details.
When created, both nodes and components get scene-global integer IDs. They can be queried from the Scene by using the functions \ref Scene::GetNode "GetNode()" and \ref Scene::GetComponent "GetComponent()". This is much faster than for example doing recursive name-based scene node queries.
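A minimal sketch of an ID-based query, assuming node and scene pointers are at hand:
\code
unsigned nodeID = node->GetID();
Node* sameNode = scene->GetNode(nodeID); // Much faster than a recursive name-based query
\endcode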
%String tags can be optionally assigned into scene nodes to aid in identification. See e.g. the functions \ref Node::AddTag "AddTag()", \ref Node::RemoveTag "RemoveTag()" and \ref Node::SetTags "SetTags()". Nodes with a specific tag can be queried from the Scene by calling the \ref Scene::GetNodesWithTag "GetNodesWithTag()" function.
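For example, tagging a node and then querying by tag could look like this:
\code
node->AddTag("Enemy");

PODVector<Node*> enemies;
scene->GetNodesWithTag(enemies, "Enemy");
\endcode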
\section SceneModel_Hierarchy Scene hierarchy
There is no inbuilt concept of an entity or a game object; rather it is up to the programmer to decide the node hierarchy, and in which nodes to place any logic. Typically, free-moving objects in the 3D world would be created as children of the root node. Nodes can be created either with or without a name, see \ref Node::CreateChild "CreateChild()". Uniqueness of node names is not enforced.
Whenever there is some hierarchical composition, it is recommended (and in fact necessary, because components do not have their own 3D transforms) to create a child node. For example, if a character was holding an object in its hand, the object should have its own node, which would be parented to the character's hand bone (also a Node.) The exception is the physics CollisionShape, which can be offset and rotated individually in relation to the node. See \ref Physics "Physics" for more details. Note that Scene's own transform is purposefully ignored as an optimization when calculating world derived transforms of child nodes, so changing it has no effect and it should be left as it is (position at origin, no rotation, no scaling.)
%Scene nodes can be freely reparented. In contrast components are always created to the node they belong to, and can not be moved between nodes. Both child nodes and components are stored using SharedPtr containers; this means that detaching a child node from its parent or removing a component will also destroy it, if no other references to it exist. Both Node & Component provide the \ref Node::Remove "Remove()" function to accomplish this without having to go through the parent. Note that no operations on the node or component in question are safe after calling that function.
It is also legal to create a Node that does not belong to a scene. This is useful for example with a camera moving in a scene that may be loaded or saved, because then the camera will not be saved along with the actual scene, and will not be destroyed when the scene is loaded.
However, depending on the components used, creating components to a node outside the scene, then moving the node to a scene later may not work completely as expected. For example, a RigidBody component can not store its velocities if it does not have access to the scene's physics world component to actually create the Bullet rigid body object.
\section SceneModel_Update Scene updates
A Scene whose updates are enabled (default) will be automatically updated on each main loop iteration. See \ref Scene::SetUpdateEnabled "SetUpdateEnabled()".
Nodes and components can be excluded from the scene update by disabling them, see \ref Node::SetEnabled "SetEnabled()". Disabling for example a drawable component also makes it invisible, a sound source component becomes inaudible etc. If a node is disabled, all of its components are treated as disabled regardless of their own enable/disable state.
\section SceneModel_Logic Creating logic functionality
To implement your game logic you typically either create script objects (when using scripting) or new components (when using C++). %Script objects exist in a C++ placeholder component, but can be basically thought of as components themselves. For a simple example to get you started, check the 05_AnimatingScene sample, which attaches a Rotator object to scene nodes to rotate them on each frame update.
Unless you have extremely serious reasons for doing so, you should not subclass the Node class in C++ for implementing your own logic. Doing so will theoretically work, but has the following drawbacks:
- Loading and saving will not work properly without changes, as the serialization code assumes that the root node is a %Scene and all child nodes are of the %Node class; it will not know how to instantiate your custom subclass.
- The Editor does not know how to edit your subclass.
\section SceneModel_LoadSave Loading and saving scenes
Scenes can be loaded and saved in either binary, JSON, or XML formats; see the functions \ref Scene::Load "Load()", \ref Scene::LoadXML "LoadXML()", \ref Scene::LoadJSON "LoadJSON()", \ref Scene::Save "Save()", \ref Scene::SaveXML "SaveXML()" and \ref Scene::SaveJSON "SaveJSON()". See \ref Serialization "Serialization" for the technical details on how this works. When a scene is loaded, all existing content in it (child nodes and components) is removed first.
Nodes and components that are marked temporary will not be saved. See \ref Serializable::SetTemporary "SetTemporary()".
To be able to track the progress of loading a (large) scene without having the program stall for the duration of the loading, a scene can also be loaded asynchronously. This means that on each frame the scene loads resources and child nodes until a certain amount of milliseconds has been exceeded. See \ref Scene::LoadAsync "LoadAsync()" and \ref Scene::LoadAsyncXML "LoadAsyncXML()". Use the functions \ref Scene::IsAsyncLoading "IsAsyncLoading()" and \ref Scene::GetAsyncProgress "GetAsyncProgress()" to track the loading progress; the latter returns a float value between 0 and 1, where 1 is fully loaded. The scene will not update or render before it is fully loaded.
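A sketch of asynchronous loading and progress polling; the resource name is hypothetical:
\code
SharedPtr<File> file = GetSubsystem<ResourceCache>()->GetFile("Scenes/MyScene.xml");
scene_->LoadAsyncXML(file);

// Later, for example in an E_UPDATE handler
if (scene_->IsAsyncLoading())
{
    float progress = scene_->GetAsyncProgress(); // 0 = just started, 1 = fully loaded
    // ... update a loading indicator
}
\endcode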
\section SceneModel_Instantiation Object prefabs
Just loading or saving whole scenes is not flexible enough for eg. games where new objects need to be dynamically created. On the other hand, creating complex objects and setting their properties in code will also be tedious. For this reason, it is also possible to save a scene node (and its child nodes, components and attributes) to either binary, JSON, or XML to be able to instantiate it later into a scene. Such a saved object is often referred to as a prefab. There are three ways to do this:
- In code by calling \ref Node::Save "Save()", \ref Node::SaveJSON "SaveJSON()", or \ref Node::SaveXML "SaveXML()" on the Node in question.
- In the editor, by selecting the node in the hierarchy window and choosing "Save node as" from the "File" menu.
- Using the "node" command in AssetImporter, which will save the scene node hierarchy and any models contained in the input asset (eg. a Collada file)
To instantiate the saved node into a scene, call \ref Scene::Instantiate "Instantiate()", \ref Scene::InstantiateJSON "InstantiateJSON()" or \ref Scene::InstantiateXML "InstantiateXML()" depending on the format. The node will be created as a child of the Scene but can be freely reparented after that. Position and rotation for placing the node need to be specified. The NinjaSnowWar example uses the XML format for its object prefabs; these exist in the bin/Data/Objects directory.
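For example, instantiating an XML prefab at the origin could look like this (the resource name is hypothetical):
\code
SharedPtr<File> file = GetSubsystem<ResourceCache>()->GetFile("Objects/MyObject.xml");
Node* instance = scene_->InstantiateXML(*file, Vector3::ZERO, Quaternion::IDENTITY);
\endcode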
\section SceneModel_Events Scene graph events
The Scene object sends events on scene graph modification, such as nodes or components being added or removed, the enabled status of a node or component being
changed, or name or tags being changed. These are used in the Editor to implement keeping the scene hierarchy window up to date. See the include file
SceneEvents.h. Note that when a node is removed from the scene, individual component removals are not signaled.
\section SceneModel_FurtherInformation Further information
For more information on the component-based scene model, see for example http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/. Note that the Urho3D scene model is not a pure Entity-Component-System design, which would have the components just as bare data containers, and only systems acting on them. Instead the Urho3D components contain logic of their own, and actively communicate with the systems (such as rendering, physics or script engine) they depend on.
\page Resources Resources
Resources include most things in Urho3D that are loaded from mass storage during initialization or runtime:
- Animation
- Image
- Model
- Material
- ParticleEffect
- ScriptFile
- Shader
- Sound
- Technique
- Texture2D
- Texture2DArray
- Texture3D
- TextureCube
- XMLFile
- JSONFile
They are managed and loaded by the ResourceCache subsystem. Like with all other \ref ObjectTypes "typed objects", resource types are identified by 32-bit type name hashes (C++) or type names (script). An object factory must be registered for each resource type.
The resources themselves are identified by their file paths, relative to the registered resource directories or \ref PackageFile "package files". By default, the engine registers the resource directories Data and CoreData, or the packages Data.pak and CoreData.pak if they exist.
If loading a resource fails, an error will be logged and a null pointer is returned.
Typical C++ example of requesting a resource from the cache, in this case, a texture for a UI element. Note the use of a convenience template argument to specify the resource type, instead of using the type hash.
\code
healthBar->SetTexture(GetSubsystem<ResourceCache>()->GetResource<Texture2D>("Textures/HealthBarBorder.png"));
\endcode
The same in script would look like this (note the use of a property instead of a setter function):
\code
healthBar.texture = cache.GetResource("Texture2D", "Textures/HealthBarBorder.png");
\endcode
Resources can also be created manually and stored to the resource cache as if they had been loaded from disk.
Memory budgets can be set per resource type: if resources consume more memory than allowed, the oldest resources will be removed from the cache if not in use anymore. By default the memory budgets are set to unlimited.
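For example, a sketch of setting a 64 MB budget for 2D textures:
\code
GetSubsystem<ResourceCache>()->SetMemoryBudget(Texture2D::GetTypeStatic(), 64 * 1024 * 1024);
\endcode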
\section Resources_Background Background loading of resources
Normally, when requesting resources using \ref ResourceCache::GetResource "GetResource()", they are loaded immediately in the main thread, which may take several milliseconds for all the required steps (load file from disk,
parse data, upload to GPU if necessary) and can therefore result in framerate drops.
If you know in advance what resources you need, you can request them to be loaded in a background thread by calling \ref ResourceCache::BackgroundLoadResource "BackgroundLoadResource()". The event E_RESOURCEBACKGROUNDLOADED will be sent after the loading is complete; it will tell if the loading actually was a success or a failure. Depending on the resource, only a part of the loading process may be moved to a background thread, for example the finishing GPU upload step always needs to happen in the main thread. Note that if you call GetResource() for a resource that is queued for background loading, the main thread will stall until its loading is complete.
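A sketch of queuing a background load and reacting to its completion; the resource name and handler class are hypothetical:
\code
GetSubsystem<ResourceCache>()->BackgroundLoadResource<Texture2D>("Textures/MyTexture.png");
SubscribeToEvent(E_RESOURCEBACKGROUNDLOADED, URHO3D_HANDLER(MyClass, HandleResourceLoaded));
\endcode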
The asynchronous scene loading functions \ref Scene::LoadAsync "LoadAsync()", \ref Scene::LoadAsyncJSON "LoadAsyncJSON()" and \ref Scene::LoadAsyncXML "LoadAsyncXML()" have the option to background load the resources first before proceeding to load the scene content. They can also be used to only load the resources without modifying the scene, by specifying the LOAD_RESOURCES_ONLY mode. This allows preparing a scene or object prefab file for fast instantiation.
Finally the maximum time (in milliseconds) spent each frame on finishing background loaded resources can be configured, see \ref ResourceCache::SetFinishBackgroundResourcesMs "SetFinishBackgroundResourcesMs()".
\section Resources_BackgroundImplementation Implementing background loading
When writing new resource types, the background loading mechanism requires implementing two functions: \ref Resource::BeginLoad "BeginLoad()" and \ref Resource::EndLoad "EndLoad()". BeginLoad() is potentially called in a background thread and should do as much work (such as file I/O) as possible without violating the \ref Multithreading "multithreading" rules. EndLoad() should perform the main thread finishing step, such as GPU upload. Either step can return false to indicate failure to load the resource.
If a resource depends on other resources, writing efficient threaded loading for it can be hard, as calling GetResource() is not allowed inside BeginLoad() when background loading. There are a few options: it is allowed to queue new background load requests by calling BackgroundLoadResource() within BeginLoad(), or if the needed resource does not need to be permanently stored in the cache and is safe to load outside the main thread (for example Image or XMLFile, which do not possess any GPU-side data), \ref ResourceCache::GetTempResource "GetTempResource()" can be called inside BeginLoad.
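A minimal sketch of a custom resource type supporting threaded loading; the payload is hypothetical:
\code
class MyResource : public Resource
{
    URHO3D_OBJECT(MyResource, Resource);

public:
    explicit MyResource(Context* context) :
        Resource(context)
    {
    }

    /// Potentially called in a worker thread; only thread-safe work such as file I/O and parsing belongs here.
    bool BeginLoad(Deserializer& source) override
    {
        data_ = source.ReadString(); // Hypothetical payload
        return true;
    }

    /// Called in the main thread; a finishing step such as GPU upload would go here.
    bool EndLoad() override
    {
        return true;
    }

private:
    String data_;
};
\endcode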
\page Localization Localization
The Localization subsystem provides a simple way to create multilingual applications.
\section LocalizationInit Initialization
Before using the subsystem, the localization string collection(s) need to be loaded. A common practice is to do this at application startup. Multiple collection files can be loaded, and each can define either just one or several languages. For example:
\code
Localization* l10n = GetSubsystem<Localization>();
// 1st JSON file format
l10n->LoadJSONFile("StringsEnRu.json");
l10n->LoadJSONFile("StringsDe.json");
// 2nd JSON file format
l10n->LoadJSONFile("StringsLv.json", "lv");
\endcode
JSON files must be in UTF-8 encoding without a BOM. Sample files are in the bin/Data directory. The JSON files must have one of the following formats:
1. The `LoadJSONFile("StringsEnRu.json")` method will automatically pick the languages from the JSON file
\code
{
"string id 1":{
"language 1":"value11",
"language 2":"value12",
"language 3":"value13"
},
"string id 2":{
"language 1":"value21",
"language 2":"value22",
"language 3":"value23"
}
}
\endcode
2. You must pass a second argument to the `LoadJSONFile("StringsLv.json", "lv")` method to tell the parser which language is being loaded
\code
{
"string id 1": "value 1",
"string id 2": "value 2",
"string id 3": "value 3"
}
\endcode
Any number of languages can be defined. Remember that language names and string identifiers are case sensitive. "En" and "en" are considered different languages.
During the loading process, languages are numbered in the order they are found, with indexing starting from zero. The first language found is set to be initially active.
\section LocalizationUsing Using
The Get function returns a string with the specified string identifier in the current language.
\code
Text* t = new Text(context_);
t->SetName("Text1");
Localization* l10n = GetSubsystem<Localization>();
t->SetText(l10n->Get("string 1"));
\endcode
If the string id is empty, an empty string will be returned. If the translation is not found, the id will be returned unmodified and a warning will be logged.
Use the SetLanguage() function to change the language at runtime.
\code
Localization* l10n = GetSubsystem<Localization>();
l10n->SetLanguage("language 2");
\endcode
When the language is changed, the E_CHANGELANGUAGE event will be sent. Subscribe to it to perform the necessary relocalization of your user interface (texts, sprites etc.)
\code
SubscribeToEvent(E_CHANGELANGUAGE, URHO3D_HANDLER(Game, HandleChangeLanguage));
void Game::HandleChangeLanguage(StringHash eventType, VariantMap& eventData)
{
Localization* l10n = GetSubsystem<Localization>();
...
Text* t = static_cast<Text*>(uiRoot->GetChild("Text1", true));
t->SetText(l10n->Get("string 1"));
}
\endcode
Text %UI elements also support automatic translation to avoid manual work.
\code
Text* t2 = new Text(context_);
t2->SetText("string 2");
t2->SetAutoLocalizable(true);
\endcode
In this case the text value is used as the string identifier.
Also see the example 40_Localization.
\page Scripting Scripting
To enable AngelScript scripting support, the Script subsystem needs to be created and registered after initializing the Engine. This is accomplished by the following code, seen eg. in Tools/Urho3DPlayer/Urho3DPlayer.cpp:
\code
context_->RegisterSubsystem(new Script(context_));
\endcode
There are three ways the AngelScript language can be interacted with in Urho3D:
\section Scripting_Immediate Immediate execution
Immediate execution takes one line of AngelScript, compiles it, and executes. This is not recommended for anything that needs high performance, but can be used for example to implement a developer console. Call the Script subsystem's \ref Script::Execute "Execute()" function to use. For example:
\code
GetSubsystem<Script>()->Execute("Print(\"Hello World!\");");
\endcode
It may be useful to be able to access a specific scene or a script file while executing immediate script code. These can be set on the Script subsystem by calling \ref Script::SetDefaultScene "SetDefaultScene()" and \ref Script::SetDefaultScriptFile "SetDefaultScriptFile()".
\section Scripting_Procedural Calling a function from a script file
This requires a successfully loaded ScriptFile resource, whose \ref ScriptFile::Execute "Execute()" function will be used. To identify the function to be called, its full declaration is needed. Parameters are passed in a VariantVector. For example:
\code
ScriptFile* file = GetSubsystem<ResourceCache>()->GetResource<ScriptFile>("Scripts/MyScript.as");
VariantVector parameters;
parameters.Push(Variant(100)); // Add an int parameter
file->Execute("void MyFunction(int)", parameters); // Execute
\endcode
If the function being called has void return type and no parameters, its name can alternatively be given instead of the full declaration.
\ref ScriptFile::Execute "Execute()" also has an overload which takes a function pointer instead of querying by declaration. Using a pointer is naturally faster than a query, but also more risky: in case the ScriptFile resource is unloaded or reloaded, any function pointers will be invalidated.
\section Scripting_Object Instantiating a script object
The component ScriptInstance can be used to instantiate a specific class from within a script file. After instantiation, the script object can respond to scene updates, \ref Events "events" and \ref Serialization "serialization" much like a component written in C++ would, if it has the appropriate methods implemented. For example:
\code
ScriptInstance* instance = node->CreateComponent<ScriptInstance>();
instance->CreateObject(GetSubsystem<ResourceCache>()->GetResource<ScriptFile>("Scripts/MyClass.as"), "MyClass");
\endcode
The class must implement the empty interface ScriptObject to make its base class statically known. This enables accessing any script object in the scene using ScriptInstance's \ref ScriptInstance::GetScriptObject "GetScriptObject()" function.
The following methods implementing the component behaviour will be checked for; none of them are required.
- void Start()
- void Stop()
- void DelayedStart()
- void Update(float)
- void PostUpdate(float)
- void FixedUpdate(float)
- void FixedPostUpdate(float)
- void Save(Serializer&)
- void Load(Deserializer&)
- void WriteNetworkUpdate(Serializer&)
- void ReadNetworkUpdate(Deserializer&)
- void ApplyAttributes()
- void TransformChanged()
The update methods above correspond to the variable timestep scene update and post-update, and the fixed timestep physics world update and post-update. The application-wide update events are not handled by default.
The Start() and Stop() methods do not have direct counterparts in C++ components. Start() is called just after the script object has been created. Stop() is called just before the script object is destroyed. This happens when the ScriptInstance is destroyed, or if the script class is changed.
When a scene node hierarchy with script objects is instantiated (such as when loading a scene) any child nodes may not have been created yet when Start() is executed, and can thus not be relied upon for initialization. The DelayedStart() method can be used in this case instead: if defined, it is called immediately before any of the Update() calls.
TransformChanged() is called whenever the scene node transform changes and the node was not dirty before, similar to C++ components' OnMarkedDirty() function. The function should read the node's world transform (or rotation / position / scale) to reset the dirty status and ensure the next dirty notification is also sent.
Subscribing to \ref Events "events" in script behaves differently depending on whether \ref Object::SubscribeToEvent "SubscribeToEvent()" is called from a script object's method, or from a procedural script function. If called from an instantiated script object, the ScriptInstance becomes the event receiver on the C++ side, and calls the specified handler method when the event arrives. If called from a function, the ScriptFile will be the event receiver and the handler must be a free function in the same script file. The third case is if the event is subscribed to from a script object that does not belong to a ScriptInstance. In that case the ScriptFile will create a proxy C++ object on demand to be able to forward the event to the script object.
The script object's enabled state can be controlled through the \ref ScriptInstance::SetEnabled "SetEnabled()" function. When disabled, the scripted update methods or event handlers will not be called. This can be used to reduce CPU load in a large or densely populated scene.
There are shortcut methods on the script side for creating and accessing a node's script object: node.CreateScriptObject() and node.GetScriptObject(). Alternatively, if the node has only one ScriptInstance, and a specific class is not needed, the node's scriptObject property can also be used. CreateScriptObject() takes the script file name (or alternatively, a ScriptFile object handle) and class name as parameters and creates a ScriptInstance component automatically, then creates the script object. For example:
\code
ScriptObject@ object = node.CreateScriptObject("Scripts/MyClass.as", "MyClass");
\endcode
Note that these are not actual Node member functions on the C++ side, as the %Scene classes are not allowed to depend on scripting.
\section Scripting_ObjectSerialization Script object serialization
After instantiation, the script object's public member variables that can be converted into Variant and that don't begin with an underscore are automatically available as attributes of the ScriptInstance, and will be serialized.
Node and Component handles are also converted into nodeID and componentID attributes automatically. Note: this automatic attribute mechanism means that a ScriptInstance's attribute list changes dynamically depending on the class that has been instantiated.
If the script object contains more complex data structures, you can also serialize and deserialize into a binary buffer manually by implementing the Load() and Save() methods.
%Network replication of the script object variables must be handled manually by implementing the WriteNetworkUpdate() and ReadNetworkUpdate() methods, which also write and read a binary buffer. These methods should write/read all replicated variables of the object. Additionally, the ScriptInstance must be marked for network replication by calling MarkNetworkUpdate() whenever the replicated data changes. Because this replication mechanism can not sync per variable, but always sends the whole binary buffer if even one bit of the data changes, also consider using the automatically replicated node user variables.
\section Script_DelayedCalls Delayed method calls
Delayed method calls can be used in script objects to implement time-delayed actions. Use the DelayedExecute() function in script object code to add a method to be executed later. The parameters are the delay in seconds, repeat flag, the full declaration of the function, and optionally parameters, which must be placed in a Variant array. For example:
\code
class Test : ScriptObject
{
void Start()
{
Array<Variant> parameters;
parameters.Push(Variant(100));
DelayedExecute(1.0, false, "void Trigger(int)", parameters);
}
void Trigger(int parameter)
{
Print("Delayed function triggered with parameter " + parameter);
}
}
\endcode
Delayed method calls can be removed by declaration using the ClearDelayedExecute() function. If an empty declaration (default) is given as parameter, all delayed calls are removed.
If the method being called has void return type and no parameters, its name can alternatively be given instead of the full declaration.
When a scene is saved/loaded, any pending delayed calls are also saved and restored.
\section Script_ScriptAPI The script API
Most of the Urho3D classes are exposed to scripts; however, things that require low-level access or high performance (like direct low-level rendering) are not. Also, for scripting convenience some things have been changed from the C++ API:
- The template array and string classes are exposed as Array<type> and String.
- Public member variables are exposed without the trailing underscore. For example x, y, z in Vector3.
- Whenever only a single parameter is needed, setter and getter functions are replaced with properties. Such properties start with a lowercase letter. If an index parameter is needed, the property will be indexed. Indexed properties are in plural.
- The element count property of arrays and other dynamic structures such as VariantMap and ResourceRefList is called "length", though the corresponding C++ function is usually Size().
- Subsystems exist as global properties: time, fileSystem, log, cache, network, input, ui, audio, engine, graphics, renderer, script, console, debugHud.
- Additional global properties exist for accessing the script object's node, the scene and the scene-wide components: node, scene, octree, physicsWorld, debugRenderer. When an object method is not executing, these are null. An exception: when the default scene for immediate execution has been set by calling \ref Script::SetDefaultScene "SetDefaultScene()", it is always available as "scene".
- The currently executing script object's ScriptInstance component is available through the global property self.
- The currently executing script file is available through the global property scriptFile.
- The first script object created to a node is available as its scriptObject property.
- Printing raw output to the log is simply called Print(). The rest of the logging functions are accessed by calling log.Debug(), log.Info(), log.Warning() and log.Error().
- Functions that would take a StringHash parameter usually take a string instead. For example sending events, requesting resources and accessing components.
- Most of StringUtils have been exposed as methods of the string class. For example String.ToBool().
- Template functions for getting components or resources by type are not supported. Instead automatic type casts are performed as necessary.
Check the automatically built \ref ScriptAPI "Scripting API" documentation for the exact function signatures. Note that the API documentation can be regenerated to the Urho3D log file by calling the \ref Script::DumpAPI "DumpAPI()" function on the Script subsystem or by using the \ref Tools_ScriptCompiler "ScriptCompiler" tool.
\section Script_Bytecode Precompiling scripts to bytecode
Instead of compiling scripts from source on-the-fly during startup, they can also be precompiled to bytecode, then loaded. Use the \ref Tools_ScriptCompiler "ScriptCompiler" utility for this.
The Script subsystem will automatically redirect script file resource requests (.as) to the compiled versions (.asc) if the .as file does not exist. Making a final build of a scripted application could therefore involve compiling all the scripts with ScriptCompiler, then deleting the original .as files from the build.
\section Scripting_Limitations Limitations
There are some complexities of the scripting system one has to watch out for:
- During the execution of the script object's constructor, the object is not yet associated with the ScriptInstance, and therefore subscribing to events, adding delayed method calls, or trying to access the node or scene will fail. The use of the constructor is best reserved for initializing member variables only.
- When the resource request for a particular ScriptFile is initially made, the script file and the files it includes are compiled into an AngelScript script module. Each script module has its own class hierarchy that is not usable from other script modules, unless the classes are declared shared. See AngelScript documentation for more details.
- If a ScriptFile resource is reloaded, all the script objects created from it will be destroyed, then recreated. They will lose any stored state as their constructors and Start() methods will be run again. This is rarely useful when running an actual game, but may be helpful during development.
A global VariantMap, globalVars, can be accessed by all scripts to store shared data or to preserve data through script file reloads.
\section Scripting_Modifications AngelScript modifications
The following changes have been made to AngelScript in Urho3D:
- For performance reasons and to guarantee immediate removal of expired objects, AngelScript garbage collection has been disabled for script classes and the Array type. This has the downside that circular references will not be detected. Therefore, whenever you have object handles in your script, think of them as if they were C++ shared pointers and avoid creating circular references with them. For safety, consider using the value type WeakHandle, which is a WeakPtr<RefCounted> exposed to script and can be used to point to any engine object (but not to script objects.) An example of using WeakHandle:
\code
WeakHandle rigidBodyWeak = node.CreateComponent("RigidBody");
RigidBody@ rigidBodyShared = rigidBodyWeak.Get(); // Is null if expired
\endcode
- %Object handle assignment can be done without the @ symbol if the object in question does not support value assignment. All exposed Urho3D C++ classes that derive from RefCounted never support value assignment. For example, when assigning the Model and Material of a StaticModel component:
\code
object.model = cache.GetResource("Model", "Models/Mushroom.mdl");
object.material = cache.GetResource("Material", "Materials/Mushroom.xml");
\endcode
In unmodified AngelScript, this would have to be written as:
\code
@object.model = cache.GetResource("Model", "Models/Mushroom.mdl");
@object.material = cache.GetResource("Material", "Materials/Mushroom.xml");
\endcode
\page LuaScripting Lua scripting
Lua scripting in Urho3D has its dedicated LuaScript subsystem that must be instantiated before the scripting capabilities can be used. Lua support is not compiled in by default but must be enabled by the CMake
build option -DURHO3D_LUA=1. For more details see \ref Build_Options "Build options". Instantiating the subsystem is done like this:
\code
context_->RegisterSubsystem(new LuaScript(context_));
\endcode
Like AngelScript, Lua scripting supports immediate compiling and execution of single script lines, loading script files and executing procedural functions from them, and instantiating script objects
to scene nodes using the LuaScriptInstance component.
\section LuaScripting_Immediate Immediate execution
Use \ref LuaScript::ExecuteString "ExecuteString()" to compile and run a line of Lua script. This should not be used for performance-critical operations.
\section LuaScripting_ScriptFiles Script files and functions
In contrast to AngelScript modules, which exist as separate entities and do not share functions or variables unless explicitly marked shared, in the Lua subsystem everything is loaded and executed in one Lua state, so scripts can naturally access everything loaded so far. To load and execute a Lua script file, call \ref LuaScript::ExecuteFile "ExecuteFile()".
After that, the functions in the script file are available for calling. Use \ref LuaScript::GetFunction "GetFunction()" to get a Lua function by name. This returns a LuaFunction object, on which you should call \ref LuaFunction::BeginCall "BeginCall()" first, followed by pushing the function parameters if any, and finally execute the function with \ref LuaFunction::EndCall "EndCall()".
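An illustrative sketch, where the script file and function names are hypothetical:
\code
LuaScript* luaScript = GetSubsystem<LuaScript>();
luaScript->ExecuteFile("LuaScripts/MyFunctions.lua"); // Hypothetical script file
LuaFunction* function = luaScript->GetFunction("MyFunction");
if (function && function->BeginCall())
{
    function->PushFloat(0.016f); // Push the function's parameters, if any
    function->EndCall();
}
\endcode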
\subsection LuaScripting_Debugging Debugging script files
Debugging Lua scripts embedded in an application can be done by attaching to a remote debugger, after first injecting a client into the application (for example, see <a href="https://wiki.eclipse.org/LDT/User_Area/User_Guides/User_Guide_1.2#Attach_session">eclipse LDT's remote debugger</a>).
However, Lua script files in Urho3D are loaded into the interpreter via Urho3D's resource cache, which loads the script file into a memory buffer before passing that buffer to the interpreter. This is good for performance and cross platform compatibility, but means that the source file is not available to debuggers and so, for example, breakpoints may not work and the code cannot be meaningfully stepped through.
For a single script that you wish to step through, you can use \ref LuaScript::ExecuteRawFile "ExecuteRawFile()", which will load the script from the file system directly into the Lua interpreter, making the source available to debuggers that rely on it. There are a couple of caveats with this:
- The file has to be on the file system, within a resource directory, and not packaged.
- If the script uses require() to import a second script, then that second script will not be available for the debugger in the same way, since internally the second script is passed to Lua via the resource cache.
To get around the second caveat, and avoid changing method calls, use the URHO3D_LUA_RAW_SCRIPT_LOADER build option. This will force Urho3D to attempt to load scripts from the file system by default, before falling back on the resource cache. You can then use \ref LuaScript::ExecuteFile "ExecuteFile()", as above, and disable the CMake option for production if required.
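For example, to load a single script, such as the Rotator.lua sample, directly from the file system for debugging with \ref LuaScript::ExecuteRawFile "ExecuteRawFile()":
\code
GetSubsystem<LuaScript>()->ExecuteRawFile("LuaScripts/Utilities/Rotator.lua");
\endcode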
\section LuaScripting_ScriptObjects Script objects
By using the LuaScriptInstance component, Lua script objects can be added to scene nodes. After the component has been created, there are two ways to specify the object to instantiate: either specifying both the script file name and the object class name, in which case the script file is loaded and executed first, or specifying only the class name, in which case the Lua code containing the class definition must already have been executed. An example of creating a script object in C++ from the LuaIntegration sample, where a class called Rotator is instantiated from the script file Rotator.lua:
\code
LuaScriptInstance* instance = node->CreateComponent<LuaScriptInstance>();
instance->CreateObject("LuaScripts/Utilities/Rotator.lua", "Rotator");
\endcode
After instantiation, use \ref LuaScriptInstance::GetScriptObjectFunction "GetScriptObjectFunction()" to get the object's functions by name; calling happens as described above.
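As a sketch, calling the Rotator object's Update() function manually from C++ (normally LuaScriptInstance calls it automatically):
\code
LuaFunction* update = instance->GetScriptObjectFunction("Update");
if (update && update->BeginCall(instance))
{
    update->PushFloat(0.016f); // The timeStep parameter
    update->EndCall();
}
\endcode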
Like their AngelScript counterparts, script object classes can define functions which are automatically called by LuaScriptInstance for operations like initialization, scene update, or load/save. These functions are listed below. Refer to the \ref Scripting "AngelScript scripting" page for details.
- Start()
- Stop()
- Update(timeStep)
- PostUpdate(timeStep)
- FixedUpdate(timeStep)
- FixedPostUpdate(timeStep)
- Save(serializer)
- Load(deserializer)
- WriteNetworkUpdate(serializer)
- ReadNetworkUpdate(deserializer)
- ApplyAttributes()
- TransformChanged()
\section LuaScripting_Events Event handling
Like in AngelScript, both procedural and object event handling is supported. In procedural event handling the LuaScript subsystem acts as the event receiver on the C++ side, and forwards the event to a Lua function. Use SubscribeToEvent and give the event name and the function to use as the handler. Optionally, a specific sender object can be given as the first argument instead. For example, subscribing to the application-wide Update event and getting its timestep parameter in the event handler function:
\code
SubscribeToEvent("Update", "HandleUpdate")
...
function HandleUpdate(eventType, eventData)
local timeStep = eventData["TimeStep"]:GetFloat()
...
end
\endcode
When subscribing a script object to receive an event, use the form self:SubscribeToEvent() instead. The function to use as the handler is given as "ClassName:FunctionName". For example, subscribing to the NodeCollision physics event and getting the other participating scene node and the contact point VectorBuffer in the handler function. Note that in Lua, retrieving an object pointer from a VariantMap requires the object type as the first parameter:
\code
CollisionDetector = ScriptObject()
function CollisionDetector:Start()
self:SubscribeToEvent(self.node, "NodeCollision", "CollisionDetector:HandleNodeCollision")
end
function CollisionDetector:HandleNodeCollision(eventType, eventData)
local otherNode = eventData["OtherNode"]:GetPtr("Node")
local contacts = eventData["Contacts"]:GetBuffer()
...
end
\endcode
\section LuaScripting_API The script API
The binding of Urho3D C++ classes is accomplished with the tolua++ library, which for the most part binds the exact same function parameters as C++. Compared to the AngelScript API, you will always have the classes' Get / Set functions available, but convenience properties exist in addition.
As seen above from the event handling examples, VariantMap handling is similar to both C++ and AngelScript. To get a variant object back from a map, index the map by its key as a string. A nil value is returned when the key does not exist in the map. Then use one of the variant getter methods to return the actual Lua object stored inside the variant object. These getter methods normally do not take any parameters, except GetPtr() and GetVoidPtr(), which take a string parameter representing the Lua user type that the method uses to cast the returned object. GetPtr() is used to get a reference counted object, while GetVoidPtr() is used to get a POD value object.
You can also use the VariantMap as a pseudo Lua table to store any variant value objects in your script. The VariantMap class tries its best to convert any Lua object into a variant object and stores it using the provided key as the index. The key can be a string, an unsigned integer, or even a StringHash object. When a particular data type conversion is not yet supported, an empty variant object is stored instead, so be careful when using this feature. You can also use one of the Variant class constructors to construct a %Variant object first before assigning it to the VariantMap, but this is slower than direct conversion. The purpose of using VariantMap in this way is to facilitate passing objects between Lua and C++, as shown in the event handling mechanism above. When creating objects on the Lua side, you have to make sure they are not garbage collected by Lua while there are still references pointing to them on the C++ side, especially when the objects are not reference counted.
\code
local myMap = VariantMap()
myMap[1] = Spline(LINEAR_CURVE) -- LINEAR_CURVE = 2
print(myMap[1].typeName, myMap[1]:GetVoidPtr("Spline").interpolationMode)
-- output: VoidPtr 2
myMap["I am a table"] = { 100, 200, 255 }
print(myMap["I am a table"].typeName, myMap["I am a table"]:GetBuffer():ReadByte())
-- output: Buffer 100
print(myMap["I am a table"]:GetRawBuffer()[3], myMap["I am a table"]:GetRawBuffer()[2])
-- output: 255 200
local hash = StringHash("secret key")
myMap[hash] = Vector2(3, 4)
print(myMap[hash].typeName, myMap[hash]:GetVector2():Length())
-- output: Vector2 5
\endcode
As shown in the above example, you can use either GetRawBuffer() or GetBuffer() to get the unsigned char array stored in a variant object. It also shows that VariantMap is capable of converting a Lua table containing an array of unsigned char to a variant object stored as a buffer. Likewise, it is capable of converting a Lua table containing an array of variant objects or an array of strings to be stored as a VariantVector or StringVector, respectively. It also converts any Lua primitive data types and all Urho3D classes that are exposed to Lua, such as the math classes, reference counted classes, POD classes, the resource reference class, etc.
In line with C++ and AngelScript, in Lua you have to call one of the %Variant's getter methods to "unbox" the actual object stored inside a %Variant object. However, specifically in Lua, there is a generic Get() method which takes advantage of Lua being dynamically typed, so the method can unbox a %Variant object and return the stored object as a typeless Lua object. It takes one optional string parameter representing a Lua user type that the method uses to cast the returned object. The parameter is required in cases where a type cast is needed when returning an object from a %Variant storing a void pointer or a refcounted pointer. The type cast can also be optional, such as when requesting a %VectorBuffer to be returned for a %Variant storing an unsigned char buffer, or requesting an unsigned or %StringHash to be returned for a %Variant storing an integer value. The parameter is ignored in all other cases. Continuing with the same example above, we can index the map and access the stored objects like so:
\code
print(myMap[1]:Get("Spline").interpolationMode)
print(myMap["I am a table"]:Get("VectorBuffer"):ReadByte())
print(myMap["I am a table"]:Get()[2])
print(myMap[hash]:Get():Length())
\endcode
There is also a generic Set() method in the %Variant class to cope with the fact that Lua does not support assignment operator overloading. The Set() method takes a single parameter, which can be anything in Lua that is convertible into a %Variant, including the nil value, which is stored as an empty %Variant. Use this method when you need to assign a value into an existing %Variant object.
For the rest of the functions and classes, see the generated \ref LuaScriptAPI "Lua script API reference". Also, look at the Lua counterparts of the sample applications in the bin/Data/LuaScripts directory and compare them to the C++ and AngelScript versions to familiarize yourself with how things are done on the Lua side.
One more thing to note about our Lua scripting implementation is its two-way conversion between C++ collection containers (Vector and PODVector) and Lua arrays (tables of non-POD and POD objects, respectively). The conversion is done automatically when the collection crosses the C++/Lua boundary. The generated Lua script API reference page does not reflect this fact correctly: when obtaining a collection of objects using the Lua script API, you should treat it as a Lua table even though the documentation page states that a %Vector or %PODVector user type is returned.
\section LuaScripting_Allocation Object allocation & Lua garbage collection
There are two ways to allocate a C++ object in Lua scripting, which behave differently with respect to Lua's automatic garbage collection:
1) Call class constructor:
\code
local scene = Scene()
\endcode
tolua++ will register this C++ object with garbage collection, and Lua will collect it eventually. Do not use this form if you will add the
object to an object hierarchy that is kept alive on the C++ side with SharedPtr's, for example child scene nodes or %UI child elements.
Otherwise the object will be double-deleted, resulting in a crash.
Note that calling the class constructor in this way is equivalent to calling the class's new_local() function.
2) Call class new() function:
\code
local text = Text:new()
\endcode
When using this form the object will not be collected by Lua, so it is safe to pass into C++ object hierarchies.
Otherwise, to prevent memory leaks, it needs to be deleted manually by calling the delete function on it:
\code
text:delete()
\endcode
When you call the \ref ResourceCache::GetFile "GetFile()" function of ResourceCache from Lua, the file you receive must also be manually deleted as described above once you are done with it.
\page Rendering Rendering
Much of the rendering functionality in Urho3D is built on two subsystems, Graphics and Renderer.
\section Rendering_Graphics Graphics
Graphics implements the low-level functionality:
- Creating the window and the rendering context
- Setting the screen mode
- Keeping track of GPU resources
- Keeping track of rendering context state (current rendertarget, vertex and index buffers, textures, shaders and renderstates)
- Loading shaders
- Performing primitive rendering operations
- Handling lost device
Screen resolution, fullscreen/windowed, vertical sync and hardware multisampling level are all set at once by calling Graphics's \ref Graphics::SetMode "SetMode()" function. There is also an experimental option of rendering to an existing window by passing its OS-specific handle to \ref Graphics::SetExternalWindow "SetExternalWindow()" before setting the initial screen mode.
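For example, a minimal sketch switching to a 1280x720 windowed mode; the full \ref Graphics::SetMode "SetMode()" overload additionally takes parameters such as fullscreen, vsync and multisampling:
\code
Graphics* graphics = GetSubsystem<Graphics>();
graphics->SetMode(1280, 720); // Change only the width and height
\endcode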
When setting the initial screen mode, Graphics does a few checks:
- For Direct3D9, shader model 3.0 support is checked.
- For OpenGL, version 3.2 support is checked for first and used if available. As a fallback, version 2.0 with EXT_framebuffer_object, EXT_packed_depth_stencil and EXT_texture_filter_anisotropic extensions is checked for. The ARB_instanced_arrays extension is also checked for but not required; it will enable hardware instancing support when present.
- Hardware shadow map support is checked. Both AMD & NVIDIA style shadow maps can be used; if neither is available, no shadows will be rendered.
- %Light pre-pass and deferred rendering mode support is checked. These require sufficient multiple rendertarget support, and R32F texture format support.
\section Rendering_Renderer Renderer
Renderer implements the actual rendering of 3D views each frame, and controls global settings such as texture quality, material quality, specular lighting and shadow map base resolution.
To render, it needs a Scene with an Octree component, and a Camera that does not necessarily have to belong to the scene. The octree stores all visible components (derived from Drawable) to allow querying for them in an accelerated manner. The needed information is collected in a Viewport object, which can be assigned with Renderer's \ref Renderer::SetViewport "SetViewport()" function.
By default there is one viewport, but the amount can be increased with the function \ref Renderer::SetNumViewports "SetNumViewports()". The viewport(s) should cover the entire screen; otherwise hall-of-mirrors artifacts may occur. By specifying a zero screen rectangle the whole window will be used automatically. The viewports will be rendered in ascending order, so if you want for example to have a small overlay window on top of the main viewport, use viewport index 0 for the main view, and 1 for the overlay.
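A typical single-viewport setup from C++ looks like this (assuming scene_ and cameraNode_ have been created beforehand, as in the sample applications):
\code
Renderer* renderer = GetSubsystem<Renderer>();
SharedPtr<Viewport> viewport(new Viewport(context_, scene_, cameraNode_->GetComponent<Camera>()));
renderer->SetViewport(0, viewport);
\endcode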
Viewports can also be defined for rendertarget textures. See \ref AuxiliaryViews "Auxiliary views" for details.
Each viewport defines a command sequence for rendering the scene, the \ref RenderPaths "render path". By default there exist forward, light pre-pass and deferred render paths in the bin/CoreData/RenderPaths directory; see \ref Renderer::SetDefaultRenderPath "SetDefaultRenderPath()" to set the default for new viewports. If not overridden from the command line, forward rendering is the default. Deferred rendering modes will be advantageous once there is a large number of per-pixel lights affecting each object, but their disadvantages are the lack of hardware multisampling and inability to choose the lighting model per material. In place of multisample antialiasing, a FXAA post-processing edge filter can be used; see the MultipleViewports sample application (bin/Data/Scripts/09_MultipleViewports.as) for an example of how to use it.
The steps for rendering each viewport on each frame are roughly the following:
- Query the octree for visible objects and lights in the camera's view frustum.
- Check the influence of each visible light on the objects. If the light casts shadows, query the octree for shadowcaster objects.
- Construct render operations (batches) for the visible objects, according to the scene passes in the render path command sequence.
- Perform the render path command sequence during the rendering step at the end of the frame.
- If the scene has a DebugRenderer component and the viewport has debug rendering enabled, render debug geometry last. Can be controlled with \ref Viewport::SetDrawDebug "SetDrawDebug()", default is enabled.
In the default render paths, the rendering operations proceed in the following order:
- Opaque geometry ambient pass, or G-buffer pass in deferred rendering modes.
- Opaque geometry per-pixel lighting passes. For shadow casting lights, the shadow map is rendered first.
- (%Light pre-pass only) Opaque geometry material pass, which renders the objects with accumulated per-pixel lighting.
- Post-opaque pass for custom render ordering such as the skybox.
- Refractive geometry pass.
- Transparent geometry pass. Transparent, alpha-blended objects are sorted according to distance and rendered back-to-front to ensure correct blending.
- Post-alpha pass, can be used for 3D overlays that should appear on top of everything else.
\section Rendering_Drawable Rendering components
The rendering-related components defined by the %Graphics and %UI libraries are:
- Octree: spatial partitioning of Drawables for accelerated visibility queries. Needs to be created into the Scene (root node.)
- Camera: describes a viewpoint for rendering, including projection parameters (FOV, near/far distance, perspective/orthographic)
- Drawable: Base class for anything visible.
- StaticModel: non-skinned geometry. Can LOD transition according to distance.
- StaticModelGroup: renders several object instances while culling and receiving light as one unit.
- Skybox: a subclass of StaticModel that appears to always stay in place.
- AnimatedModel: skinned geometry that can do skeletal and vertex morph animation.
- AnimationController: drives animations forward automatically and controls animation fade-in/out.
- BillboardSet: a group of camera-facing billboards, which can have varying sizes, rotations and texture coordinates.
- ParticleEmitter: a subclass of BillboardSet that emits particle billboards.
- RibbonTrail: creates tail geometry following an object.
- Light: illuminates the scene. Can optionally cast shadows.
- Terrain: renders heightmap terrain.
- CustomGeometry: renders runtime-defined unindexed geometry. The geometry data is not serialized or replicated over the network.
- DecalSet: renders decal geometry on top of objects.
- Zone: defines ambient light and fog settings for objects inside the zone volume.
- Text3D: text that is rendered into the 3D view.
Additionally there are 2D drawable components defined by the \ref Urho2D "Urho2D" sublibrary.
\section Rendering_Optimizations Optimizations
The following techniques will be used to reduce the amount of CPU and GPU work when rendering. By default they are all on:
- Software rasterized occlusion: after the octree has been queried for visible objects, the objects that are marked as occluders are rendered on the CPU to a small hierarchical-depth buffer, which is then used to test the non-occluders for visibility. Use \ref Renderer::SetMaxOccluderTriangles "SetMaxOccluderTriangles()" and \ref Renderer::SetOccluderSizeThreshold "SetOccluderSizeThreshold()" to configure the occlusion rendering. Occlusion testing is always multithreaded, but occlusion rendering is singlethreaded by default, to allow rejecting subsequent occluders while rendering front-to-back. Use \ref Renderer::SetThreadedOcclusion "SetThreadedOcclusion()" to enable threading in rendering as well; note that this can actually perform worse in e.g. terrain scenes where terrain patches act as occluders.
- Hardware instancing: rendering operations with the same geometry, material and light will be grouped together and performed as one draw call if supported. Note that even when instancing is not available, they still benefit from the grouping, as render state only needs to be checked & set once before rendering each group, reducing the CPU cost.
- %Light stencil masking: in forward rendering, before objects lit by a spot or point light are re-rendered additively, the light's bounding shape is rendered to the stencil buffer to ensure pixels outside the light range are not processed.
Note that many more optimization opportunities are possible at the content level, for example using geometry & material LOD, grouping many static objects into one object for less draw calls, minimizing the amount of subgeometries (submeshes) per object for less draw calls, using texture atlases to avoid render state changes, using compressed (and smaller) textures, and setting maximum draw distances for objects, lights and shadows.
\section Rendering_ReuseView Reusing view preparation
In some applications, like stereoscopic VR rendering, one needs to render a slightly different view of the world to separate viewports. Normally this results in the view preparation process (described above) being repeated for each view, which can be costly for CPU performance.
To eliminate the duplicate view preparation cost, you can use \ref Viewport::SetCullCamera "SetCullCamera()" to instruct a Viewport to use a different camera for culling than rendering. When multiple viewports share the same culling camera, the view preparation will be performed only once.
To work properly, the culling camera's frustum should cover all the views you are rendering using it, or else objects may be missing from the rendered views. The culling camera should not be using the auto aspect ratio mode, to ensure you stay in full control of its view frustum.
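As a sketch, assuming two hypothetical eye viewports and a separate culling camera whose frustum covers both views:
\code
leftEyeViewport->SetCullCamera(cullCamera);
rightEyeViewport->SetCullCamera(cullCamera); // View preparation is now performed only once for both
\endcode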
\section Rendering_GPUResourceLoss Handling GPU resource loss
On Direct3D9 and Android OpenGL ES 2.0 it is possible to lose the rendering context (and therefore GPU resources) due to the application window being minimized to the background. Also, to work around possible GPU driver bugs the desktop OpenGL context will be voluntarily destroyed and recreated when changing screen mode or toggling between fullscreen and windowed. Therefore, on all graphics APIs one must be prepared for losing GPU resources.
Textures that have been loaded from a file, as well as vertex & index buffers that have shadowing enabled will restore their contents automatically, the rest have to be restored manually. On Direct3D9 non-dynamic (managed) textures and buffers will never be lost, as the runtime automatically backs them up to system memory.
See \ref GPUObject::IsDataLost "IsDataLost()" function in VertexBuffer, IndexBuffer, Texture2D, TextureCube and Texture3D classes for detecting data loss. Inbuilt classes such as Model, BillboardSet and Font already handle data loss for their internal GPU resources, so checking for it is only necessary for custom buffers and textures. Watch out especially for trying to render with an index buffer that has uninitialized data after a loss, as this can cause a crash inside the GPU driver due to referencing non-existent (garbage) vertices.
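A sketch of restoring a hypothetical custom non-shadowed vertex buffer after device loss, assuming the application keeps its own copy of the vertex data:
\code
if (vertexBuffer_->IsDataLost())
{
    vertexBuffer_->SetData(vertexData_); // Re-upload from the application-side copy
    vertexBuffer_->ClearDataLost();
}
\endcode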
\section Rendering_ExtraInstanceData Defining extra instancing data
The only per-instance data that the rendering system supplies by itself are the objects' world transform matrices. If you want to define extra per-instance data in your custom Drawable subclasses, follow these steps (a combined sketch follows the list):
- Call \ref Renderer::SetNumExtraInstancingBufferElements "SetNumExtraInstancingBufferElements()". This defines the amount of extra Vector4's (in addition to the transform matrices) that the instancing data will contain.
- The SourceBatch structure(s) of your custom Drawable need to point to the extra data. See the \ref SourceBatch::instancingData_ "instancingData_" member. Null pointer is allowed for objects that do not need to define extra data; be aware that the instancing vertex buffer will contain undefined data in that case.
- Because non-instanced rendering will not have access to the extra data, you should disable non-instanced rendering of GEOM_STATIC drawables. Call \ref Renderer::SetMinInstances "SetMinInstances()" with a parameter 1 to accomplish this.
- Use the extra data as texcoord 7 onward in your vertex shader (texcoord 4-6 are the transform matrix.)
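A combined sketch of the setup, assuming one extra Vector4 per instance; the instanceData_ member of the custom Drawable subclass is hypothetical:
\code
Renderer* renderer = GetSubsystem<Renderer>();
renderer->SetNumExtraInstancingBufferElements(1); // One extra Vector4 per instance
renderer->SetMinInstances(1); // Disable non-instanced rendering of GEOM_STATIC drawables

// In the custom Drawable subclass, point each batch to the extra data
batches_[0].instancingData_ = &instanceData_; // A Vector4 member holding the per-instance data
\endcode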
\section Rendering_Further Further details
See also \ref VertexBuffers "Vertex buffers", \ref Materials "Materials", \ref Shaders "Shaders", \ref Lights "Lights and shadows", \ref RenderPaths "Render path", \ref SkeletalAnimation "Skeletal animation", \ref Particles "Particle systems", \ref Zones "Zones", and \ref AuxiliaryViews "Auxiliary views".
See \ref RenderingModes "Rendering modes" for detailed discussion on the forward, light pre-pass and deferred rendering modes.
See \ref APIDifferences "Differences between rendering APIs" for what to watch out for when using the low-level rendering functionality directly.
\page RenderingModes Rendering modes
The default render paths supplied with Urho3D implement forward, light pre-pass and deferred rendering modes. Where they differ is how per-pixel lighting is calculated for opaque objects; transparent objects always use forward rendering. Note that on OpenGL ES 2.0 only forward rendering is available.
\section RenderingModes_Forward Forward rendering
Forward rendering begins with an ambient light pass for the objects; this also adds any per-vertex lights. Then, the objects are re-rendered for each per-pixel light affecting them (basic multipass rendering), up to the maximum per-pixel light count which is by default unlimited, but can be reduced with \ref Drawable::SetMaxLights "SetMaxLights()". The render operations are sorted by light, ie. render the effect of the first light on all affected objects first, then the second etc. If shadow maps are re-used (default on), a shadow casting light's shadow map will be updated immediately before rendering the lit objects. When shadow maps are not re-used, all shadow maps are updated first even before drawing the ambient pass.
Materials can also define an optimization pass, called "litbase", for forward rendering where the ambient light and the first per-pixel light are combined. This pass can not be used, however, if there are per-vertex lights affecting the object, or if the ambient light has a per-vertex gradient.
\section RenderingModes_Prepass Light pre-pass rendering
%Light pre-pass requires a minimum of two passes per object. First the normal, specular power, depth and lightmask (8 low bits only) of opaque objects are rendered to the following G-buffer:
- RT0: World-space normal and specular power (D3DFMT_A8R8G8B8)
- RT1: Linear depth (D3DFMT_R32F)
- DS: Hardware depth and lightmask (D3DFMT_D24S8)
After the G-buffer is complete, light volumes (spot and point lights) or fullscreen quads (directional lights) will be rendered to a light accumulation buffer to calculate the diffuse and specular light at each opaque pixel. Specular light is stored as intensity only. Stencil compare (AND operation) with the 8 low bits of the light's lightmask will be used for light culling. Similarly to forward rendering, shadow maps will be updated before each light as necessary.
Finally the opaque objects are re-rendered during the material pass, which combines ambient and vertex lighting with per-pixel lighting from the light accumulation buffer. After this rendering proceeds to the post-opaque and refract passes, transparent object rendering pass, and the post-alpha pass, just like forward rendering.
\section RenderingModes_Deferred Deferred rendering
Deferred rendering needs to render each opaque object only once to the G-buffer, but this rendering pass is much heavier than in light pre-pass rendering, as also ambient, emissive and diffuse albedo information is output at the same time. The G-buffer is the following:
- RT0: Final rendertarget with ambient, per-vertex and emissive color (D3DFMT_X8R8G8B8)
- RT1: Diffuse albedo and specular intensity (D3DFMT_A8R8G8B8)
- RT2: World-space normal and specular power (D3DFMT_A8R8G8B8)
- RT3: Linear depth (D3DFMT_R32F)
- DS: Hardware depth and lightmask (D3DFMT_D24S8)
After the G-buffer has been rendered, light volumes will be rendered into the final rendertarget to accumulate per-pixel lighting. As the material albedo is available, all lighting calculations are final and output both the diffuse and specular color at the same time. After light accumulation rendering proceeds to post-opaque, refract, transparent, and post-alpha passes, as in other rendering modes.
\section RenderingModes_Comparison Advantages and disadvantages
Whether using forward or deferred rendering modes is more advantageous depends on the scene and lighting complexity.
If the scene contains a large number of complex objects lit by multiple lights, forward rendering quickly increases the total draw call and vertex count due to re-rendering the objects for each light. However, light pre-pass and deferred rendering have a higher fixed cost due to the generation of the G-buffer. Also, in forward per-pixel lighting more calculations (such as light direction and shadow map coordinates) can be done at the vertex shader level, while in deferred all calculations need to happen per-pixel. This means that for a low light count, for example 1-2 per object, forward rendering will run faster based on the more efficient lighting calculations alone.
Forward rendering makes it possible to use hardware multisampling and different shading models in different materials if needed, while neither is possible in the deferred modes. Also, only forward rendering allows to calculate the material's diffuse and specular light response with the most accuracy. %Light pre-pass rendering needs to reconstruct light specular color from the accumulated diffuse light color, which is inaccurate in case of overlapping lights. Deferred rendering on the other hand can not use the material's full specular color, it only stores a monochromatic intensity based on the green component into the G-buffer.
%Light pre-pass rendering has a much more lightweight G-buffer pass, but it must render all opaque geometry twice. %Light accumulation in pre-pass mode is slightly faster than in deferred. Despite this, unless there is significant overdraw, in vertex-heavy scenes deferred rendering will likely be faster than light pre-pass.
Finally note that due to OpenGL framebuffer object limitations an extra framebuffer blit has to happen at the end in both light pre-pass and deferred rendering, which costs some performance. Also, because multiple rendertargets on OpenGL must have the same format, an R32F texture can not be used for linear depth, but instead 24-bit depth is manually encoded and decoded into RGB channels.
\page APIDifferences Differences between rendering APIs
These differences need to be observed when using the low-level rendering functionality directly. The high-level rendering architecture, including the Renderer and UI subsystems and the Drawable subclasses already handle most of them transparently to the user.
- The post-projection depth range is (0,1) for Direct3D and (-1,1) for OpenGL. The Camera can be queried either for an API-specific or API-independent (Direct3D convention) projection matrix.
- To render with 1:1 texel-to-pixel mapping, on Direct3D9 UV coordinates have to be shifted a half-pixel to the right and down, or alternatively vertex positions can be shifted a half-pixel left and up. The required shift can be queried with the function \ref Graphics::GetPixelUVOffset "GetPixelUVOffset()".
- On Direct3D the depth-stencil surface can be equal to or larger in size than the color rendertarget. On OpenGL the sizes must always match. Furthermore, OpenGL can not use the backbuffer depth-stencil surface when rendering to a texture. To overcome these limitations, Graphics will create correctly sized depth-stencil surfaces on demand whenever a texture is set as a color rendertarget and a null depth-stencil is specified.
- On Direct3D9 the viewport will be reset to full size when the first color rendertarget is changed. On OpenGL & Direct3D11 this does not happen. To ensure correct operation on both APIs, always use this sequence: first set the rendertargets, then the depth-stencil surface and finally the viewport.
- On OpenGL modifying a texture will cause it to be momentarily set on the first texture unit. If another texture was set there, the assignment will be lost. Graphics performs a check to not assign textures redundantly, so it is safe and recommended to always set all needed textures before rendering.
- Modifying an index buffer on OpenGL will similarly cause the existing index buffer assignment to be lost. Therefore, always set the vertex and index buffers before rendering.
- %Shader resources are stored in different locations depending on the API: bin/CoreData/Shaders/HLSL for Direct3D, and bin/CoreData/Shaders/GLSL for OpenGL.
- To ensure similar UV addressing for render-to-texture viewports on both APIs, on OpenGL texture viewports will be rendered upside down.
- Direct3D11 is strict about vertex attributes referenced by shaders. A model will not render (input layout fails to create) if the shader for example asks for UV coordinates and the model does not have them. For this particular case, see the NOUV define in LitSolid shader, which is defined in the NoTexture family of techniques to prevent the attempted reading of UV coords.
- Nearest texture filtering with anisotropy is not supported properly on Direct3D11. Depending on the GPU, it may also fail on Direct3D9.
- Alpha-to-coverage is not supported on Direct3D9.
- Bool and int shader uniforms are not supported on Direct3D9.
OpenGL ES 2.0 has further limitations:
- Of the DXT formats, only DXT1 compressed textures will be uploaded as compressed, and only if the EXT_texture_compression_dxt1 extension is present. Other DXT formats will be uploaded as uncompressed RGBA. ETC1 (Android) and PVRTC (iOS/tvOS) compressed textures are supported through the .ktx and .pvr file formats.
- %Texture formats such as 16-bit and 32-bit floating point are not available. Corresponding integer 8-bit formats will be returned instead.
- %Light pre-pass and deferred rendering are not supported due to missing multiple rendertarget support, and limited rendertarget formats.
- Wireframe and point fill modes are not supported.
- Due to texture unit limit (usually 8), point light shadow maps are not supported.
- To reduce fillrate, the stencil buffer is not reserved and the stencil test is not available. As a consequence, the light stencil masking optimization is not used.
- For improved performance, shadow mapping quality is reduced: there is no smooth PCF filtering and directional lights do not support shadow cascades. Consider also using the simple shadow quality (1 sample) to avoid dependent texture reads in the pixel shader, which have an especially high performance cost on iOS/tvOS hardware.
- Custom clip planes are not currently supported.
- 3D and 2D array textures are not currently supported.
- Multisampled texture rendertargets are not supported.
- Line antialiasing is not supported.
- WebGL appears to not support rendertarget mipmap regeneration, so mipmaps for rendertargets are disabled on the Web platform for now.
\page VertexBuffers Vertex buffers
%Geometry data is defined by VertexBuffer objects, which hold a number of vertices of a certain vertex format. For rendering, the data is uploaded to the GPU, but optionally a shadow copy of
the vertex data can exist in CPU memory, see \ref VertexBuffer::SetShadowed "SetShadowed()" to allow e.g. raycasts into the geometry without having to lock and read GPU memory.
The vertex format can be defined in two ways by two overloads of \ref VertexBuffer::SetSize "SetSize()":
1) With a bitmask representing hardcoded vertex element semantics and datatypes. Each of the following elements may or may not be present, but their order and datatypes may not change. The order is defined by the LegacyVertexElement enum in GraphicsDefs.h, while bitmask defines exist as MASK_POSITION, MASK_NORMAL etc.
- Position (%Vector3)
- Normal (%Vector3)
- %Color (unsigned char[4], normalized)
- Texcoord1 (%Vector2)
- Texcoord2 (%Vector2)
- Cubetexcoord1 (%Vector3)
- Cubetexcoord2 (%Vector3)
- Tangent (%Vector4)
- Blendweights (float[4])
- Blendindices (unsigned char[4])
- Instancematrix1-3 (%Vector4)
- %Object index (int, not supported on D3D9)
Note that the texcoord numbers are misleading as the actual texcoord inputs in shaders are zero-based. Instancematrix1-3 are reserved to be used by the engine for instancing and map to shader texcoord inputs 4-6.
2) By defining VertexElement structures, which tell the data type, semantic, and zero-based semantic index (for e.g. multiple texcoords), and whether the data is per-vertex or per-instance data.
This allows to freely define the order and meaning of the elements. However for 3D objects, the first element should always be "Position" and use the Vector3 type to ensure e.g. raycasts and occlusion rendering work properly.
The third parameter of \ref VertexBuffer::SetSize "SetSize()" is whether to create the buffer as static or dynamic. This is a hint to the underlying graphics API how to allocate the buffer data. Dynamic will suit frequent (every frame) modification better, while static has likely better overall performance for world geometry rendering.
After the size and format are defined, the vertex data can be set either by calling \ref VertexBuffer::SetData "SetData()" / \ref VertexBuffer::SetDataRange "SetDataRange()" or locking the vertex buffer for access, writing the data to the memory space returned from the lock, then unlocking when done.
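As a minimal sketch, creating a shadowed static buffer with the legacy bitmask format and filling it with data; numVertices and vertexData are assumed to be defined by the application:
\code
SharedPtr<VertexBuffer> buffer(new VertexBuffer(context_));
buffer->SetShadowed(true); // Keep a CPU-side copy of the data
buffer->SetSize(numVertices, MASK_POSITION | MASK_NORMAL, false); // Static buffer
buffer->SetData(vertexData); // Interleaved data matching the element mask
\endcode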
\section VertexBuffers_MultipleBuffers Multiple vertex buffers
Multiple vertex buffers can be set to the Graphics subsystem at once, or defined into a drawable's Geometry definition for rendering.
In case the buffers both contain the same semantic, for example position, the buffer at a higher index overrides the one at a lower index. This is used by the AnimatedModel component to apply vertex morphs: it creates a separate clone vertex buffer which overrides the original model's position, normal and tangent data, and assigns it on index 1 while index 0 is the original model's vertex buffer.
A vertex buffer should either only contain per-vertex data, or per-instance data. Instancing in the high-level rendering (Renderer & View classes) works by momentarily appending the instance vertex buffer to the geometry being rendered in an instanced fashion.
\section VertexBuffers_IndexBuffers Index buffers
A vertex buffer is often accompanied by an index buffer (IndexBuffer class) to allow indexed rendering which avoids repeating the same vertices over and over. Its API is similar to vertex buffers, but an index buffer only needs to define the number of indices, whether the indices are 16- or 32-bit (largeIndices flag) and whether the buffer is dynamic.
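A matching sketch for an index buffer; numIndices and indexData are again assumed to be defined by the application:
\code
SharedPtr<IndexBuffer> indexBuffer(new IndexBuffer(context_));
indexBuffer->SetShadowed(true);
indexBuffer->SetSize(numIndices, false); // 16-bit indices, static
indexBuffer->SetData(indexData);
\endcode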
\page Materials Materials
Material and Technique resources define how to render 3D scene geometry. On the disk, they are XML or JSON data. Default and example materials exist in the bin/CoreData/Materials & bin/Data/Materials subdirectories, and techniques exist in the bin/CoreData/Techniques subdirectory.
A material defines the textures, shader parameters and culling & fill mode to use, and refers to one or several techniques. A technique defines the actual rendering passes, the shaders to use in each, and all other rendering states such as depth test, depth write, and blending.
A material definition looks like this:
\code
<material>
<technique name="TechniqueName" quality="q" loddistance="d" />
<texture unit="diffuse|normal|specular|emissive|environment" name="TextureName" />
<texture ... />
<shader vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4" />
<parameter name="name" value="x y z w" />
<parameter ... />
<cull value="cw|ccw|none" />
<shadowcull value="cw|ccw|none" />
<fill value="solid|wireframe|point" />
<depthbias constant="x" slopescaled="y" />
<alphatocoverage enable="true|false" />
<lineantialias enable="true|false" />
<renderorder value="x" />
<occlusion enable="true|false" />
</material>
\endcode
Several techniques can be defined for different quality levels and LOD distances. %Technique quality levels are specified from 0 (low) to 2 (high). When rendering, the highest available technique that does not exceed the Renderer's material quality setting will be chosen, see \ref Renderer::SetMaterialQuality "SetMaterialQuality()".
The techniques for different LOD levels and quality settings must appear in a specific order:
- Most distant & highest quality
- ...
- Most distant & lowest quality
- Second most distant & highest quality
- ...
%Material shader parameters can be floats or vectors up to 4 components, or matrices.
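Parameters can also be set programmatically; for example, a sketch in C++, assuming material points to a Material resource:
\code
material->SetShaderParameter("MatDiffColor", Color(1.0f, 0.5f, 0.25f, 1.0f));
\endcode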
Default culling mode is counterclockwise. The shadowcull element specifies the culling mode to use in the shadow pass. Note that material's depth bias settings do not apply in the shadow pass; during shadow rendering the light's depth bias is used instead.
Render order is an 8-bit unsigned value that can be used to affect rendering order within a pass, overriding state or distance sorting. The default value is 128; smaller values will render earlier, and larger values later. One example use of render order is to ensure that materials which use discard in the pixel shader (ALPHAMASK define) are rendered after fully opaque materials so that the hardware depth buffer behaves optimally; in this case the render order should be increased. Read below for caveats regarding it.
Occlusion flag allows to disable software occlusion rendering per material, for example if parts of a model are transparent. By default occlusion is enabled.
Materials can optionally set shader compilation defines (vsdefines & psdefines). In this case they will be added to the techniques' own compilation defines, and the techniques are cloned as necessary
to ensure uniqueness.
Enabling alpha-to-coverage on the material enables it on all passes. Alternatively it can be enabled per-pass in the technique for fine-grained control.
\section Materials_Textures Material textures
Diffuse maps specify the surface color in the RGB channels. Optionally they can use the alpha channel for blending and alpha testing. They should preferably be compressed to DXT1 (no alpha or 1-bit alpha) or DXT5 (smooth alpha) format.
Normal maps encode the tangent-space surface normal for normal mapping. There are two options for storing normals, which require choosing the correct material technique, as the pixel shader is different in each case:
- Store as RGB. In this case use the DiffNormal techniques. This is the default used by AssetImporter, to ensure no conversion of normal textures needs to happen.
- Store as xGxR, ie. Y-component in the green channel, and X-component in the alpha. Z will be reconstructed in the pixel shader. This encoding lends itself well to DXT5 compression. You need to use the pixel shader define PACKEDNORMAL in your materials; refer to the Stone example materials. To convert normal maps to this format, you can use AMD's The Compressonator utility, see https://developer.amd.com/Resources/archive/ArchivedTools/gpu/compressonator/Pages/default.aspx.
Make sure the normal map is oriented correctly: an even surface should have the color value R 0.5 G 0.5 B 1.0.
Models using a normal-mapped material need to have tangent vectors in their vertex data; the easiest way to ensure this is to use the switch -t (generate tangents) when using either AssetImporter or OgreImporter to import models to Urho3D format. If there are no tangents, the light attenuation on the normal-mapped material will behave in a completely erratic fashion.
Specular maps encode the specular surface color as RGB. Note that deferred rendering is only able to use monochromatic specular intensity from the G channel, while forward and light pre-pass rendering use fully colored specular. DXT1 format should suit these textures well.
Textures can have an accompanying XML file which specifies load-time parameters, such as addressing, mipmapping, and number of mip levels to skip on each quality level:
\code
<texture>
<address coord="u|v|w" mode="wrap|mirror|clamp|border" />
<border color="r g b a" />
<filter mode="nearest|bilinear|trilinear|anisotropic|nearestanisotropic|default" anisotropy="x" />
<mipmap enable="false|true" />
<quality low="x" medium="y" high="z" />
<srgb enable="false|true" />
</texture>
\endcode
The sRGB flag controls both whether the texture should be sampled with sRGB to linear conversion, and if used as a rendertarget, pixels should be converted back to sRGB when writing to it. To control whether the backbuffer should use sRGB conversion on write, call \ref Graphics::SetSRGB "SetSRGB()" on the Graphics subsystem.
Anisotropy level can be optionally specified. If omitted (or if the value 0 is specified), the default from the Renderer class will be used.
\section Materials_CubeMapTextures Cube map textures
Using cube map textures requires an XML file to define the cube map face images, or a single image with layout. In this case the XML file *is* the texture resource name in material scripts or in LoadResource() calls.
Individual face images are defined in the XML like this: (see bin/Data/Textures/Skybox.xml for an example)
\code
<cubemap>
<face name="PositiveX_ImageName" />
<face name="NegativeX_ImageName" />
<face name="PositiveY_ImageName" />
<face name="NegativeY_ImageName" />
<face name="PositiveZ_ImageName" />
<face name="NegativeZ_ImageName" />
</cubemap>
\endcode
A single image texture with a layout is used like this:
\code
<cubemap>
<image name="ImageName" layout="horizontal|horizontalnvidia|horizontalcross|verticalcross|blender" />
</cubemap>
\endcode
For the layout definitions, see http://www.cgtextures.com/content.php?action=tutorial&name=cubemaps and https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Build_a_skybox
\section Materials_3DTextures 3D textures
3D textures likewise require an XML file to describe how they should be loaded. The XML should contain either a "volume" or "colorlut" element, which defines the image to load with its "name" attribute. The "volume" mode requires that the image is in a format that supports 3D (volume) textures directly, for example DDS. The "colorlut" mode allows any format, and instead spreads out the Z slices of the volume image horizontally in a 2D image from the left to the right: for example LUTIdentity.png has 16 16x16 slices for a total image size 256x16.
\code
<texture3d>
<colorlut name="LUTIdentity.png" />
</texture3d>
\endcode
Using a 3D texture for color correction postprocess (see bin/Data/PostProcess/ColorCorrection.xml) requires the 3D texture to be assigned to the "volume" texture unit, so that the effect knows to load the texture as the correct type. The lookup for the corrected color happens by using the original color's red channel as the X coordinate, the green channel as the Y coordinate, and blue as the Z. Therefore the identity LUT's slices (which shouldn't transform the color at all) grow red from left to right and green from top to bottom, and finally the slices themselves turn blue from left to right.
\section Materials_2DArrayTextures 2D array textures
2D array textures (Texture2DArray class) are available on OpenGL and Direct3D 11. They are also defined via an XML file defining the images to use on each array layer:
\code
<texturearray>
<layer name="Layer1_ImageName" />
<layer name="Layer2_ImageName" />
<layer name="Layer3_ImageName" />
</texturearray>
\endcode
\section Materials_Techniques Techniques and passes
A technique definition looks like this:
\code
<technique vs="VertexShaderName" ps="PixelShaderName" vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4" desktop="false|true" >
<pass name="base|litbase|light|alpha|litalpha|postopaque|refract|postalpha|prepass|material|deferred|depth|shadow" desktop="false|true" >
vs="VertexShaderName" ps="PixelShaderName" vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4"
vsexcludes="EXCLUDE1 EXCLUDE2" psexcludes="EXCLUDE3 EXCLUDE4"
lighting="unlit|pervertex|perpixel"
blend="replace|add|multiply|alpha|addalpha|premulalpha|invdestalpha|subtract|subtractalpha"
[cull="cw|ccw|none"]
depthtest="always|equal|less|lessequal|greater|greaterequal"
depthwrite="true|false"
alphatocoverage="true|false" />
<pass ... />
<pass ... />
</technique>
\endcode
The "desktop" attribute in either technique or pass allows to specify it requires desktop graphics hardware (exclude mobile devices.) Omitting it is the same as specifying false.
A pass should normally not define culling mode, but it can optionally specify it to override the value in the material.
Shaders are referred to by giving the name of a shader without path and file extension. For example "Basic" or "LitSolid". The engine will add the correct path and file extension (Shaders/HLSL/LitSolid.hlsl for Direct3D, and Shaders/GLSL/LitSolid.glsl for OpenGL) automatically. The same shader source file contains both the vertex and pixel shader. In addition, compilation defines can be specified, which are passed to the shader compiler. For example the define "DIFFMAP" typically enables diffuse mapping in the pixel shader.
Shaders and their compilation defines can be specified on both the technique and pass level. If a pass does not override the default shaders specified on the technique level, it still can specify additional compilation defines to be used. However, if a pass overrides the shaders, then the technique-level defines are not used.
As a material can set further shader defines, which would be applied to all passes, the "vsexcludes" and "psexcludes" mechanism allows per-pass control to prevent them from being included. This is intended for eliminating the compilation of unnecessary shader variations, for example a shadow shader attempting to read a normal map.
The technique definition does not need to enumerate shaders used for different geometry types (non-skinned, skinned, instanced, billboard) and different per-vertex and per-pixel light combinations. Instead the engine will add certain hardcoded compilation defines for these. See \ref Shaders "Shaders" for details.
The purposes of the different passes are:
- base: Renders ambient light, per-vertex lights and fog for an opaque object.
- litbase: Renders the first per-pixel light, ambient light and fog for an opaque object. This is an optional pass for optimization.
- light: Renders one per-pixel light's contribution additively for an opaque object.
- alpha: Renders ambient light, per-vertex lights and fog for a transparent object.
- litalpha: Renders one per-pixel light's contribution additively for a transparent object
- postopaque: Custom rendering pass after opaque geometry. Can be used to render the skybox.
- refract: Custom rendering pass after postopaque pass. Can sample the viewport texture from the environment texture unit to render refractive objects.
- postalpha: Custom rendering pass after transparent geometry.
- prepass: %Light pre-pass only - renders normals, specular power and depth to the G-buffer.
- material: %Light pre-pass only - renders opaque geometry final color by combining ambient light, per-vertex lights and per-pixel light accumulation.
- deferred: Deferred rendering only - renders ambient light and per-vertex lights to the output rendertarget, and diffuse albedo, normals, specular intensity + power and depth to the G-buffer.
- depth: Renders linear depth to a rendertarget for post-processing effects.
- shadow: Renders to a hardware shadow map (depth only) for shadow map generation.
More custom passes can be defined and referred to in the \ref RenderPaths "render path definition". For the built-in passes listed above, the lighting shader permutations to load (unlit, per-vertex or per-pixel) are recognized automatically, but for custom passes they need to be explicitly specified. The default is unlit.
The optional "litbase" pass reduces draw call count by combining ambient lighting with the first per-pixel light affecting an object. However, it has intentional limitations to not require too many shader permutations: there must be no vertex lights affecting the object, and the ambient lighting can not have a gradient. In case of excessive overdraw, it is possibly better not to define it, but instead allow the base pass (which is computationally very lightweight) to run first, initializing the Z buffer for later passes.
The refract pass requires pingponging the scene rendertarget to a texture, but this will not be performed if there is no refractive geometry to render, so there is no unnecessary cost to it.
\section Materials_RenderOrder Render order caveats
Render order works well when you know a material is going to render only a single pass, for example a deferred G-buffer pass. However, when forward rendering and per-pixel lights are used, rendering of typical lit geometry can be split over the "base", "litbase" and "light" passes. If you use
render order combined with depth test manipulation to force some object to render in front of others, the pass order may not be obvious. The "base"
pass would be rendered first, but objects often don't use it; rather, they render later as part of the forward light loop using the "litbase" pass. This may throw off the depth test manipulation and cause a different result than expected.
An easy fix is to simply disable the "litbase" optimization pass altogether, though this costs performance. This can be done globally from the forward renderpath (Bin/CoreData/RenderPaths/Forward.xml) by modifying the forwardlights command to read:
\code
<command type="forwardlights" pass="light" uselitbase="false" />
\endcode
\page Shaders Shaders
Urho3D uses an ubershader-like approach: permutations of each shader will be built with different compilation defines, to produce eg. static or skinned, deferred or forward or shadowed/unshadowed rendering.
The building of these permutations happens on demand: technique and renderpath definition files both refer to shaders and the compilation defines to use with them. In addition the engine will add inbuilt defines related to geometry type and lighting. It is not generally possible to enumerate beforehand all the possible permutations that can be built out of a single shader.
On Direct3D compiled shader bytecode is saved to disk in a "Cache" subdirectory next to the shader source code, so that the possibly time-consuming compilation can be skipped the next time the shader permutation is needed. On OpenGL no such mechanism is available.
\section Shaders_InbuiltDefines Inbuilt compilation defines
When rendering scene objects, the engine expects certain shader permutations to exist for different geometry types and lighting conditions. These correspond to the following compilation defines:
Vertex shader:
- NUMVERTEXLIGHTS=1,2,3 or 4: number of vertex lights influencing the object
- DIRLIGHT, SPOTLIGHT, POINTLIGHT: a per-pixel forward light is being used. Accompanied by the define PERPIXEL
- SHADOW: the per-pixel forward light has shadowing
- NORMALOFFSET: shadow receiver UV coordinates should be adjusted according to normals
- SKINNED, INSTANCED, BILLBOARD: choosing the geometry type
Pixel shader:
- DIRLIGHT, SPOTLIGHT, POINTLIGHT: a per-pixel forward light is being used. Accompanied by the define PERPIXEL
- CUBEMASK: the point light has a cube map mask
- SPEC: the per-pixel forward light has specular calculations
- SHADOW: the per-pixel forward light has shadowing
- SIMPLE_SHADOW, PCF_SHADOW, VSM_SHADOW: the shadow sampling quality that is to be used
- SHADOWCMP: use manual shadow depth compare, Direct3D9 only for DF16 & DF24 shadow map formats
- HEIGHTFOG: object's zone has height fog mode
\section Shaders_InbuiltUniforms Inbuilt shader uniforms
When objects or quad passes are being rendered, various engine inbuilt uniforms are set to assist with the rendering. Below is a partial list of the uniforms listed as HLSL data types. Look at the file Uniforms.glsl for the corresponding GLSL uniforms.
Vertex shader uniforms:
- float3 cAmbientStartColor: the start color value for a zone's ambient gradient
- float3 cAmbientEndColor: the end color value for a zone's ambient gradient
- float3 cCameraPos: camera's world position
- float cNearClip: camera's near clip distance
- float cFarClip: camera's far clip distance
- float cDeltaTime: the timestep of the current frame
- float4 cDepthMode: parameters for calculating a linear depth value between 0-1 to pass to the pixel shader in an interpolator.
- float cElapsedTime: scene's elapsed time value. Can be used to implement animating materials
- float4x3 cModel: the world transform matrix of the object being rendered
- float4x3 cView: the camera's view matrix
- float4x3 cViewInv: the inverse of the camera's view matrix (camera world transform)
- float4x4 cViewProj: the camera's concatenated view and projection matrices
- float4x3 cZone: zone's transform matrix; used for ambient gradient calculations
Pixel shader uniforms:
- float3 cAmbientColor: ambient color for a zone with no ambient gradient
- float3 cCameraPosPS: camera's world position
- float4 cDepthReconstruct: parameters for reconstructing a linear depth value between 0-1 from a nonlinear hardware depth texture sample.
- float cDeltaTimePS: the timestep of the current frame
- float cElapsedTimePS: scene's elapsed time value
- float3 cFogColor: the zone's fog color
- float4 cFogParams: fog calculation parameters (see Batch.cpp and Fog.hlsl for the exact meaning)
- float cNearClipPS: camera's near clip distance
- float cFarClipPS: camera's far clip distance
\section Shaders_Writing Writing shaders
Shaders must be written separately for HLSL (Direct3D) and GLSL (OpenGL). The built-in shaders try to implement the same functionality on both shader languages as closely as possible.
To get started with writing your own shaders, start with studying the most basic examples possible: the Basic, Shadow & Unlit shaders. Note the shader include files which bring common functionality, for example Uniforms.hlsl, Samplers.hlsl & Transform.hlsl for HLSL shaders.
Transforming the vertex (which hides the actual skinning, instancing or billboarding process) is a slight hack which uses a combination of macros and functions: it is safest to copy the following piece of code verbatim:
For HLSL:
\code
float4x3 modelMatrix = iModelMatrix;
float3 worldPos = GetWorldPos(modelMatrix);
oPos = GetClipPos(worldPos);
\endcode
For GLSL:
\code
mat4 modelMatrix = iModelMatrix;
vec3 worldPos = GetWorldPos(modelMatrix);
gl_Position = GetClipPos(worldPos);
\endcode
On both Direct3D and OpenGL the vertex and pixel shaders are written into the same file, and the entrypoint functions must be called VS() and PS(). In OpenGL mode one of these is transformed behind the scenes to the main() function required by GLSL. When compiling a vertex shader, the compilation define "COMPILEVS" is always present, and likewise "COMPILEPS" when compiling a pixel shader. These are heavily used in the shader include files to prevent constructs that are illegal for the "wrong" type of shader, and to reduce compilation time.
Vertex shader inputs need to be matched to vertex element semantics to render properly. In HLSL semantics for inputs are defined in each shader with uppercase words (POSITION, NORMAL, TEXCOORD0 etc.) while in GLSL the default attributes are defined in Transform.glsl and are matched to the vertex element semantics with a case-insensitive string "contains" operation, with an optional number postfix to define the semantic index. For example iTexCoord is the first (semantic index 0) texture coordinate, and iTexCoord1 is the second (semantic index 1).
Uniforms must be prefixed in a certain way so that the engine understands them:
- c for uniform constants, for example cMatDiffColor. The c is stripped when referred to inside the engine, so it would be called "MatDiffColor" in eg. \ref Material::SetShaderParameter "SetShaderParameter()"
- s for texture samplers, for example sDiffMap.
In GLSL shaders it is important that the samplers are assigned to the correct texture units. If you are using sampler names that are not predefined in the engine (unlike sDiffMap, which is), just make sure there is a number somewhere in the sampler's name and it will be interpreted as the texture unit. For example the terrain shader uses texture units 0-3 in the following way:
\code
uniform sampler2D sWeightMap0;
uniform sampler2D sDetailMap1;
uniform sampler2D sDetailMap2;
uniform sampler2D sDetailMap3;
\endcode
The maximum number of bones supported for hardware skinning depends on the graphics API and is relayed to the shader code in the MAXBONES compilation define. Typically the maximum is 64, but is reduced to 32 on the Raspberry Pi, and increased to 128 on Direct3D 11 & OpenGL 3. See also \ref Graphics::GetMaxBones "GetMaxBones()".
\section Shaders_API API differences
Direct3D9 and Direct3D11 share the same HLSL shader code, and likewise OpenGL 2, OpenGL 3, OpenGL ES 2 and WebGL share the same GLSL code. Macros and some conditional code are used to hide the API differences where possible.
When HLSL shaders are compiled for Direct3D11, the define D3D11 is present, and the following details need to be observed:
- Uniforms are organized into constant buffers. See the file Uniforms.hlsl for the built-in uniforms. See TerrainBlend.hlsl for an example of defining your own uniforms into the "custom" constant buffer slot.
- Both textures and samplers are defined for each texture unit. The macros in Samplers.hlsl (Sample2D, SampleCube etc.) can be used to write code that works on both APIs. These take the texture unit name without the 's' prefix.
- Vertex shader output position and pixel shader output color need to use the SV_POSITION and SV_TARGET semantics. The macros OUTPOSITION and OUTCOLOR0-3 can be used to select the correct semantic on both APIs. In the vertex shader, the output position should be specified last, as otherwise other output semantics may not function correctly. In general, it is necessary that the output semantics defined by the vertex shader are defined as pixel shader inputs in the same order. Otherwise the Direct3D shader compiler may assign the semantics incorrectly.
- On Direct3D11 the clip plane coordinate must be calculated manually. This is indicated by the CLIPPLANE compilation define, which is added automatically by the Graphics class. See for example the LitSolid.hlsl shader.
- Direct3D11 does not support luminance and luminance-alpha texture formats, but rather uses the R and RG channels. Therefore be prepared to perform swizzling in the texture reads as appropriate.
- Direct3D11 will fail to render if the vertex shader refers to vertex elements that don't exist in the vertex buffers.
For OpenGL, the define GL3 is present when GLSL shaders are being compiled for OpenGL 3+, the define GL_ES is present for OpenGL ES 2, WEBGL define is present for WebGL and RPI define is present for the Raspberry Pi. Observe the following differences:
- On OpenGL 3 GLSL version 150 will be used if the shader source code does not define the version. The texture sampling functions are different but are worked around with defines in the file Samplers.glsl. Likewise the file Transform.glsl contains macros to hide the differences in declaring vertex attributes, interpolators and fragment outputs.
- On OpenGL 3 luminance, alpha and luminance-alpha texture formats are deprecated, and are replaced with R and RG formats. Therefore be prepared to perform swizzling in the texture reads as appropriate.
- On OpenGL ES 2 precision qualifiers need to be used.
\section Shaders_Precaching Shader precaching
The shader variations that are potentially used by a material technique in different lighting conditions and rendering passes are enumerated at material load time, but because of their large amount, they are not actually compiled or loaded from bytecode before being used in rendering. Especially on OpenGL the compiling of shaders just before rendering can cause hitches in the framerate. To avoid this, used shader combinations can be dumped out to an XML file, then preloaded. See \ref Graphics::BeginDumpShaders "BeginDumpShaders()", \ref Graphics::EndDumpShaders "EndDumpShaders()" and \ref Graphics::PrecacheShaders "PrecacheShaders()" in the Graphics subsystem. The command line parameter -ds <file> can be used to instruct the Engine to begin dumping shaders automatically on startup.
Note that the used shader variations will vary with graphics settings, for example shadow quality simple/PCF/VSM or instancing on/off.
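For illustration, a minimal C++ sketch of the full dump-and-precache cycle (the cache file name is arbitrary):
\code
Graphics* graphics = GetSubsystem<Graphics>();

// First run: record the shader combinations that actually get used
graphics->BeginDumpShaders("ShaderCache.xml");
// ... run the application so that the relevant scenes and settings are exercised ...
graphics->EndDumpShaders();

// Later run: compile the recorded combinations upfront to avoid hitches
SharedPtr<File> file(new File(context_, "ShaderCache.xml"));
if (file->IsOpen())
    graphics->PrecacheShaders(*file);
\endcode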
\page RenderPaths Render path
%Scene rendering and any post-processing on a Viewport is defined by its RenderPath object, which can either be read from an XML file or be created programmatically.
The render path consists of rendertarget definitions and commands. The commands are executed in order to yield the rendering result. Each command outputs either to the destination rendertarget & viewport (default if output definition is omitted), or one of the named rendertargets. MRT output is also possible. If the rendertarget is a cube map,
the face to render to (0-5) can also be specified.
A rendertarget's size can be either absolute or multiply or divide the destination viewport size. The multiplier or divisor does not need to be an integer number. Furthermore, a rendertarget can be declared "persistent" so that it will not be mixed with other rendertargets of the same size and format, and its contents can be assumed to be available also on subsequent frames.
Note that if you already have created a named rendertarget texture in code and have stored it into the resource cache by using \ref ResourceCache::AddManualResource "AddManualResource()" you can use it directly as an output (by referring to its name) without requiring a rendertarget definition for it.
The available commands are:
- clear: Clear any of color, depth and stencil. Color clear can optionally use the fog color from the Zone visible at the far clip distance.
- scenepass: Render scene objects whose \ref Materials "material technique" contains the specified pass. Will either be front-to-back ordered with state sorting, or back-to-front ordered with no state sorting. For deferred rendering, object lightmasks can be optionally marked to the stencil buffer. Vertex lights can optionally be handled during a pass, if it has the necessary shader combinations. Textures global to the pass can be bound to free texture units; these can either be the viewport, a named rendertarget, or a texture resource identified with its pathname.
- quad: Render a viewport-sized quad using the specified shaders. The blend mode (default=replace) can be optionally specified.
- forwardlights: Render per-pixel forward lighting for opaque objects with the specified pass name. Shadow maps are also rendered as necessary.
- lightvolumes: Render deferred light volumes using the specified shaders. G-buffer textures can be bound as necessary.
- renderui: Render the UI into the output rendertarget. Using this will cause the default %UI render to the backbuffer to be skipped.
- sendevent: Send an event with a specified string parameter ("event name"). This can be used to call custom code, typically custom low-level rendering, in the middle of the renderpath execution.
Scenepass, quad, forwardlights and lightvolumes commands all allow command-global shader compilation defines, shader parameters and textures to be defined. For example in deferred rendering, the lightvolumes command would bind the G-buffer textures to be able to calculate the lighting. Note that when binding command-global textures, these are (for optimization) bound only once in the beginning of the command. If the texture binding is overwritten by an object's material, it is "lost" until the end of the command. Therefore the command-global textures should be in units that are not used by materials.
Note that a renderpath may contain at most one forwardlights and one lightvolumes command.
A render path can be loaded from a main XML file by calling \ref RenderPath::Load "Load()", after which other XML files (for example one for each post-processing effect) can be appended to it by calling \ref RenderPath::Append "Append()". Rendertargets and commands can be enabled or disabled by calling \ref RenderPath::SetEnabled "SetEnabled()" to switch eg. a post-processing effect on or off. To aid in this, both can be identified by tag names, for example the bloom effect uses the tag "Bloom" for all of its rendertargets and commands.
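For example, a post-processing effect could be appended and toggled like this (a hedged sketch; assumes an existing Viewport* viewport):
\code
ResourceCache* cache = GetSubsystem<ResourceCache>();

// Clone the viewport's current render path so the original resource stays unmodified
SharedPtr<RenderPath> effectRenderPath = viewport->GetRenderPath()->Clone();
effectRenderPath->Append(cache->GetResource<XMLFile>("PostProcess/Bloom.xml"));
effectRenderPath->SetEnabled("Bloom", false); // toggle by tag
viewport->SetRenderPath(effectRenderPath);
\endcode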
It is legal to both write to the destination viewport and sample from it during the same command: pingpong copies of its contents will be made automatically. If the viewport has hardware multisampling on, the multisampled backbuffer will be resolved to a texture before sampling it.
The render path XML definition looks like this:
\code
<renderpath>
<rendertarget name="RTName" tag="TagName" enabled="true|false" cubemap="true|false" size="x y"|sizedivisor="x y"|sizemultiplier="x y"
format="rgb|rgba|l|a|r32f|rgba16|rgba16f|rgba32f|rg16|rg16f|rg32f|lineardepth|readabledepth|d24s8" filter="true|false" srgb="true|false" persistent="true|false"
multisample="x" autoresolve="true|false" />
<command type="clear" tag="TagName" enabled="true|false" color="r g b a|fog" depth="x" stencil="y" output="viewport|RTName" face="0|1|2|3|4|5" depthstencil="DSName" />
<command type="scenepass" pass="PassName" vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4" sort="fronttoback|backtofront" marktostencil="true|false" vertexlights="true|false" metadata="base|alpha|gbuffer" depthstencil="DSName">
<output index="0" name="RTName1" face="0|1|2|3|4|5" />
<output index="1" name="RTName2" />
<output index="2" name="RTName3" />
<texture unit="unit" name="viewport|RTName|TextureName" />
<parameter name="ParameterName" value="x y z w" />
</command>
<command type="quad" vs="VertexShaderName" ps="PixelShaderName" vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4" blend="replace|add|multiply|alpha|addalpha|premulalpha|invdestalpha|subtract|subtractalpha" output="viewport|RTName" depthstencil="DSName" />
<texture unit="unit" name="viewport|RTName|TextureName" />
<parameter name="ParameterName" value="x y z w" />
</command>
<command type="forwardlights" pass="PassName" vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4" uselitbase="true|false" output="viewport|RTName" depthstencil="DSName">
<texture unit="unit" name="viewport|RTName|TextureName" />
<parameter name="ParameterName" value="x y z w" />
</command
<command type="lightvolumes" vs="VertexShaderName" ps="PixelShaderName" vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4" output="viewport|RTName" depthstencil="DSName" />
<texture unit="unit" name="viewport|RTName|TextureName" />
<parameter name="ParameterName" value="x y z w" />
</command>
<command type="renderui" output="viewport|RTName" depthstencil="DSName" />
<command type="sendevent" name="EventName" />
</renderpath>
\endcode
For examples of renderpath definitions, see the default forward, deferred and light pre-pass renderpaths in the bin/CoreData/RenderPaths directory, and the postprocess renderpath definitions in the bin/Data/PostProcess directory.
\section RenderPaths_Depth Depth-stencil handling and reading scene depth
Normally, needed depth-stencil surfaces are automatically allocated when the render path is executed.
The special "lineardepth" (synonym "depth") format is intended for storing scene depth in deferred rendering. It is not an actual hardware depth-stencil texture, but a 32-bit single channel (R) float rendertarget. (On OpenGL2 it's RGBA instead, due to the limitation of all color buffers having to be the same format. The shader include file Samplers.glsl in bin/CoreData/Shaders/GLSL provides functions to encode and decode linear depth to RGB.)
Writing depth manually to a rendertarget while using a non-readable depth-stencil surface ensures the best compatibility, and prevents conflicts from using both the depth test and manual depth sampling at the same time.
There is also a possibility to define a readable hardware depth texture, and instruct the render path to use it instead. Availability for this must first be checked with the function \ref Graphics::GetReadableDepthSupport "GetReadableDepthSupport()". On Direct3D9 this will use the INTZ "hack" format. To define a readable depth-stencil texture, use the format "readabledepth" (synonym "hwdepth") and set it as the depth-stencil by using the "depthstencil" attribute in render path commands. Note that you must set it in every command where you want to use it, otherwise an automatically allocated depth-stencil will be used. Note also that the existence of a stencil channel is not guaranteed, so stencil masking optimizations for lights normally used by the Renderer & View classes will be disabled.
In the special case of a depth-only rendering pass you can set the readable depth texture directly as the "output" and don't need to specify the "depthstencil" attribute at all.
After the readable depth texture has been filled, it can be bound to a texture unit in any subsequent commands. Pixel shaders should use the ReconstructDepth() helper function to reconstruct a linear depth value between 0-1 from the nonlinear hardware depth value. When the readable depth texture is bound for sampling, depth write is automatically disabled, as both modifying and sampling the depth would be undefined.
An example render path for readable hardware depth exists in bin/CoreData/RenderPaths/ForwardHWDepth.xml:
\code
<renderpath>
<rendertarget name="depth" sizedivisor="1 1" format="readabledepth" />
<command type="clear" depth="1.0" output="depth" />
<command type="scenepass" pass="shadow" output="depth" />
<command type="clear" color="fog" depthstencil="depth" />
<command type="scenepass" pass="base" vertexlights="true" metadata="base" depthstencil="depth" />
<command type="forwardlights" pass="light" depthstencil="depth" />
<command type="scenepass" pass="postopaque" depthstencil="depth" />
<command type="scenepass" pass="refract" depthstencil="depth">
<texture unit="environment" name="viewport" />
</command>
<command type="scenepass" pass="alpha" vertexlights="true" sort="backtofront" metadata="alpha" depthstencil="depth" />
<command type="scenepass" pass="postalpha" sort="backtofront" depthstencil="depth" />
</renderpath>
\endcode
The render path starts by allocating a readable depth-stencil texture the same size as the destination viewport, clearing its depth, then rendering a depth-only pass to it. Next the destination color rendertarget is cleared normally, while the readable depth texture is used as the depth-stencil for that and all subsequent commands. Any command after the depth render pass could now bind the depth texture to a unit for sampling, for example for smooth particle or SSAO effects.
The ForwardDepth.xml render path does the same, but using a linear depth rendertarget instead of a hardware depth texture. The advantage is better compatibility (guaranteed to work without checking \ref Graphics::GetReadableDepthSupport "GetReadableDepthSupport()") but it has worse performance as it will perform an additional full scene rendering pass.
\section RenderPaths_SoftParticles Soft particles rendering
Soft particles rendering is a practical example of utilizing scene depth reading. The default renderpaths that expose a readable depth bind the depth texture in the alpha pass. This is utilized by the UnlitParticle & LitParticle shaders when the SOFTPARTICLES shader compilation define is included. The particle techniques containing "Soft" in their name in Bin/CoreData/Techniques use this define. Note that they expect a readable depth and will not work with the plain forward renderpath!
Soft particles can be implemented with two contrasting approaches: "shrinking" and "expanding". In the shrinking approach (the default) the depth test can be left on, and the soft particle shader starts to reduce particle opacity when the particle geometry approaches solid geometry. In the expanding approach the particles should have the depth test off, and the shader instead starts to reduce the particle opacity when the particle geometry overshoots the solid geometry.
For the expanding mode, see the "SoftExpand" family of particle techniques. Their downside is that performance can be lower due to not being able to use hardware depth test.
Finally, note the SoftParticleFadeScale shader parameter, which controls the distance over which the fade takes effect. This is defined in the example materials using soft particles (SmokeSoft.xml & LitSmokeSoft.xml).
\section RenderPaths_ForwardLighting Forward lighting special considerations
Otherwise fully customized scene render passes can be specified, but there are a few things to remember related to forward lighting:
- The opaque base pass must be tagged with metadata "base". When forward lighting logic does the lit base pass optimization, it will search for a pass with the word "lit" prepended, ie. if your custom opaque base pass is
called "custombase", the corresponding lit base pass would be "litcustombase".
- The transparent base pass must be tagged with metadata "alpha". For lit transparent objects, the forward lighting logic will look for a pass with the word "lit" prepended, ie. if the custom alpha base pass is called "customalpha", the corresponding lit pass is "litcustomalpha". The lit drawcalls will be interleaved with the transparent base pass, and the scenepass command should have back-to-front sorting enabled.
- If forward and deferred lighting are mixed, the G-buffer writing pass must be tagged with metadata "gbuffer" to prevent geometry being double-lit also with forward lights.
- Remember to mark the lighting mode (per-vertex / per-pixel) into the techniques which define custom passes, as the lighting mode can be guessed automatically only for the known default passes.
- The forwardlights command can optionally disable the lit base pass optimization without having to touch the material techniques, if a separate opaque ambient-only base pass is needed. By default the optimization is enabled.
\section RenderPaths_PostProcess Post-processing effects special considerations
Post-processing effects are usually implemented by using the quad command. When using intermediate rendertargets that are of different size than the viewport rendertarget, it is often necessary in shaders to reference their (inverse) size and the half-pixel offset for Direct3D9. The engine automatically attempts to assign these shader uniforms for named rendertargets. For an example look at the bloom postprocess shaders: because there is a rendertarget called BlurH, each quad command in the renderpath will attempt to set the shader uniforms cBlurHInvSize and cBlurHOffsets (both Vector2.) Note that setting shader uniforms is case insensitive.
In OpenGL post-processing shaders it is important to distinguish between sampling a rendertarget texture and a regular texture resource, because intermediate rendertargets (such as the G-buffer) may be vertically inverted. Use the GetScreenPos() or GetQuadTexCoord() functions to get rendertarget UV coordinates from the clip coordinates; this takes flipping into account automatically. For sampling a regular texture, use GetQuadTexCoordNoFlip() function, which requires world coordinates instead of clip coordinates.
\section RenderPaths_MultiSample Multisampled rendertargets
Texture2D and TextureCube support multisampling. Programmatically, multisampling is enabled through the \ref Texture2D::SetSize "SetSize()" function when defining the dimensions and format. Multisampling can also be set in a renderpath's rendertarget definition.
The normal operation is that a multisampled rendertarget will be automatically resolved to 1-sample before being sampled as a texture. This is denoted by the \ref Texture2D::GetAutoResolve() "autoResolve" parameter, whose default value is true. On OpenGL (when supported) and Direct3D11, it's also possible to access the individual samples of a Texture2D in shader code by defining a multisampled sampler and using specialized functions (texelFetch on OpenGL, Texture2DMS.Load on Direct3D11). In this case the "autoResolve" parameter should be set to false. Note that accessing individual samples is not possible for cube textures, or when using Direct3D9.
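Programmatically, creating a 4x multisampled rendertarget texture that resolves automatically before sampling could look like this sketch:
\code
SharedPtr<Texture2D> renderTexture(new Texture2D(context_));
// Width, height, format, usage, multisample level, autoResolve
renderTexture->SetSize(1024, 768, Graphics::GetRGBFormat(), TEXTURE_RENDERTARGET, 4, true);
\endcode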
By accessing the individual samples of multisampled G-buffer textures, a deferred MSAA renderer could be implemented. This has some performance considerations / complexities (you should avoid running the lighting calculations per sample when not on triangle edges) and is not implemented by default.
\page Lights Lights and shadows
Lights in Urho3D can be directional, point, or spot lights, either per-pixel or per-vertex. Shadow mapping is supported for all per-pixel lights.
A directional light's position has no effect, as it's assumed to be infinitely far away; only its rotation matters. It casts orthographically projected shadows. To increase shadow quality, cascaded shadow mapping (splitting the view into several shadow maps along the Z-axis) can be used.
Point lights are spherical in shape. When a point light casts shadows, it will be internally split into 6 spot lights with a 90 degree FOV each. This is very expensive rendering-wise, so shadow casting point lights should be used sparingly.
Spot lights have FOV & aspect ratio values like cameras to define the shape of the light cone.
Both point and spot lights in per-pixel mode use an attenuation ramp texture to determine how the intensity varies with distance. In addition they have a shape texture, 2D for spot lights, and an optional cube texture for point lights. It is important that the spot light's shape texture has black at the borders, and has mipmapping disabled, otherwise there will be "bleeding" artifacts at the edges of the light cone.
Per-vertex mode is enabled on a light by calling \ref Light::SetPerVertex "SetPerVertex()". Per-vertex lights are evaluated during each object's ambient light and fog calculations and can be substantially faster than per-pixel lights. There is currently a maximum of 4 per-vertex lights for each object; if this number is exceeded, only the brightest per-vertex lights affecting the object will be rendered.
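As a minimal sketch, creating a point light and switching it to per-vertex mode (the scene_ variable is assumed application state):
\code
Node* lightNode = scene_->CreateChild("PointLight");
lightNode->SetPosition(Vector3(0.0f, 5.0f, 0.0f));
Light* light = lightNode->CreateComponent<Light>();
light->SetLightType(LIGHT_POINT);
light->SetRange(10.0f);
// For cheap fill lighting, evaluate the light per-vertex instead of per-pixel
light->SetPerVertex(true);
\endcode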
\section Lights_LightColor Light color
A light's color and strength are controlled by three values: \ref Light::SetColor "color", \ref Light::SetSpecularIntensity "specular intensity", and \ref Light::SetBrightness "brightness multiplier".
The brightness multiplier is applied to both the color and specular intensity to yield final values used in rendering. This can be used to implement fades or flickering without affecting the original color.
A specular intensity of 0 disables specular calculations from a per-pixel light, resulting in faster GPU calculations. Per-vertex lights never use specular calculations.
Negative (subtractive) lights can be achieved by setting either the color components or the brightness multiplier to a negative value. These can be used to locally reduce the ambient light level, for example to create a dark cave. Negative per-pixel lights will not work in light pre-pass rendering mode, as it uses a light accumulation buffer with a black initial value, so there is nothing to subtract from.
Lights can alternatively enable the use of physical values, in which case the brightness multiplier is specified in lumens, and a light temperature value in Kelvin becomes available to also modulate the color (typically the color value itself would be left white in this case.) See \ref Light::SetUsePhysicalValues "SetUsePhysicalValues()" and \ref Light::SetTemperature "SetTemperature()".
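Continuing the sketch above with an existing Light* light, physical values could be enabled like this:
\code
light->SetUsePhysicalValues(true);
light->SetBrightness(800.0f);   // interpreted as lumens when physical values are enabled
light->SetTemperature(2700.0f); // warm incandescent white, in Kelvin
light->SetColor(Color::WHITE);  // typically left white; the temperature modulates it
\endcode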
\section Lights_LightCulling Light culling
When occlusion is used, a light will automatically be culled if its bounding box is fully behind an occluder. However, directional lights have an infinite bounding box, and can not be culled this way.
It is possible to limit which objects are affected by each light, by calling \ref Drawable::SetLightMask "SetLightMask()" on both the light and the objects. The lightmasks of the light and objects are ANDed to check whether the light should have effect: the light will only illuminate an object if the result is nonzero. By default objects and lights have all bits set in their lightmask, thus passing this test always.
\ref Zone "Zones" can also be used for light culling. When an object is inside a zone, its lightmask will be ANDed with the zone's lightmask before testing it against the lights' lightmasks. Using this mechanism, objects can change their accepted light set dynamically as they move through the scene.
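A minimal sketch of lightmask-based culling; the drawable pointers are hypothetical:
\code
light->SetLightMask(0x1);         // this light only affects objects with bit 0x1 set
roomADrawable->SetLightMask(0x1); // 0x1 & 0x1 != 0, lit by the light
roomBDrawable->SetLightMask(0x2); // 0x1 & 0x2 == 0, not lit by the light
\endcode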
Care must be taken when doing light culling with lightmasks, because they easily create situations where a light's influence is cut off unnaturally. However, they can be helpful in preventing light spill into undesired areas, for example lights inside one room bleeding into another, without having to resort to shadow-casting lights.
In light pre-pass and deferred rendering, light culling happens by writing the objects' lightmasks to the stencil buffer during G-buffer rendering, and comparing the stencil buffer to the light's light mask when rendering light volumes. In this case lightmasks are limited to the low 8 bits only.
\section Lights_ShadowedLights Shadowed lights
Shadow rendering is easily the most complex aspect of using lights, and therefore a wide range of per-light parameters exists for controlling the shadows:
- BiasParameters: define constant & slope-scaled depth bias values and normal offset for preventing self-shadowing artifacts. In practice, need to be determined experimentally. Orthographic (directional) and projective (point and spot) shadows may require rather different bias values. Normal offset is an alternative shadow biasing method which is based on modifying the shadow receiver UV coordinates in the direction of the receiver geometry normal, rather than modifying the depth during shadow rendering. Yet another way of fighting self-shadowing issues is to render shadowcaster backfaces, see \ref Rendering_Materials "Materials".
- CascadeParameters: these have effect only for directional lights. They specify the far clip distance of each of the cascaded shadow map splits (maximum 4), and the fade start point relative to the maximum shadow range. Unused splits can be set to far clip 0. This structure also includes the biasAutoAdjust setting for adjusting the depth bias automatically based on cascade split distance. By default it is on at 1x strength (value 1) but could be disabled (value 0) or adjusted stronger (values larger than 1.)
- FocusParameters: these have effect for directional and spot lights, and control techniques to increase shadow map resolution. They consist of focus enable flag (allows focusing the shadow camera on the visible shadow casters & receivers), nonuniform scale enable flag (allows better resolution), automatic size reduction flag (reduces shadow map resolution when the light is far away), and quantization & minimum size parameters for the shadow camera view.
Additionally there are shadow fade distance, shadow intensity, shadow resolution, shadow near/far ratio and shadow max extrusion parameters:
- If both shadow distance and shadow fade distance are greater than zero, shadows start to fade at the shadow fade distance, and vanish completely at the shadow distance.
- Shadow intensity defines how dark the shadows are, between 0.0 (maximum darkness, the default) and 1.0 (fully lit.)
- The shadow resolution parameter scales the global shadow map size set in Renderer to determine the actual shadow map size. Maximum is 1.0 (full size) and minimum is 0.125 (one eighth size.) Choose according to the size and importance of the light; smaller shadow maps will be less performance hungry.
- The shadow near/far ratio controls shadow camera near clip distance for point & spot lights. The default ratio is 0.002, which means a light with range 100 would have its shadow camera near plane set at the distance of 0.2. Set this as high as you can for better shadow depth resolution, but note that the bias parameters will likely have to be adjusted as well.
- The shadow max extrusion distance controls how far from the view position directional light shadow cameras are positioned. The effective value will be the minimum of this parameter and the camera far clip distance. The default is 1000; increase this if you have shadow cascades to a far distance and are using tall objects, and notice missing shadows. The extrusion distance affects shadow map depth resolution and therefore the effect of shadow bias parameters.
\section Lights_ShadowGlobal Global shadow settings
The shadow map base resolution and quality (bit depth & sampling mode) are set through functions in the Renderer subsystem, see \ref Renderer::SetShadowMapSize "SetShadowMapSize()" and \ref Renderer::SetShadowQuality "SetShadowQuality()".
The shadow quality enum also allows choosing variance (VSM) shadows instead of the default hardware depth shadows. VSM shadows behave markedly differently; depth bias settings are no longer relevant, but you should make sure all your large surfaces (also ground & terrain) are marked as shadow casters, otherwise shadows cast by objects moving over them can appear unnaturally thin. For VSM shadows, see the functions \ref Renderer::SetShadowSoftness "SetShadowSoftness()" and \ref Renderer::SetVSMShadowParameters "SetVSMShadowParameters()" to control the softness (blurring) and in-shadow detection behavior. Instead of the self-shadowing artifacts common with hardware depth shadows, you may encounter light bleeding when shadow casting surfaces are close to each other in the light direction; adjusting the VSM shadow parameters may help with this.
VSM shadow maps can also be multisampled for better quality, though this has a performance cost. See \ref Renderer::SetVSMMultiSample "SetVSMMultiSample()".
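A sketch of adjusting the global settings through the Renderer subsystem:
\code
Renderer* renderer = GetSubsystem<Renderer>();
renderer->SetShadowMapSize(2048);                    // base shadow map resolution
renderer->SetShadowQuality(SHADOWQUALITY_PCF_24BIT); // 24-bit depth with PCF sampling
// Or, for variance shadows with blurring:
// renderer->SetShadowQuality(SHADOWQUALITY_BLUR_VSM);
// renderer->SetShadowSoftness(1.0f);
\endcode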
\section Lights_ShadowMapReuse Shadow map reuse
The Renderer can be configured to either reuse shadow maps, or not. To reuse is the default, use \ref Renderer::SetReuseShadowMaps "SetReuseShadowMaps()" to change.
When reuse is enabled, only one shadow texture of each shadow map size needs to be reserved, and shadow maps are rendered "on the fly" before rendering a single shadowed light's contribution onto opaque geometry. This has the downside that shadow maps are no longer available during transparent geometry rendering, so transparent objects will not receive shadows.
When reuse is disabled, all shadow maps are rendered before the actual scene rendering. In this case multiple shadow textures need to be reserved based on the number of simultaneous shadow casting lights; see the function \ref Renderer::SetNumShadowMaps "SetNumShadowMaps()". If there are not enough shadow textures, they will be assigned to the closest/brightest lights, and the rest will be rendered unshadowed. This mode needs more texture memory, but the advantage is that transparent objects can also receive shadows.
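For example, disabling reuse so that transparent objects can receive shadows could look like this:
\code
Renderer* renderer = GetSubsystem<Renderer>();
renderer->SetReuseShadowMaps(false);
renderer->SetNumShadowMaps(4); // reserve shadow textures for up to 4 simultaneous shadowed lights
\endcode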
\section Lights_ShadowCulling Shadow culling
Similarly to light culling with lightmasks, shadowmasks can be used to select which objects should cast shadows with respect to each light. See \ref Drawable::SetShadowMask "SetShadowMask()". A potential shadow caster's shadow mask will be ANDed with the light's lightmask to see if it should be rendered to the light's shadow map. Also, when an object is inside a zone, its shadowmask will be ANDed with the zone's shadowmask as well. By default all bits are set in the shadowmask.
For an example of shadow culling, imagine a house (which itself is a shadow caster) containing several objects inside, and a shadowed directional light shining in from the windows. In that case shadow map rendering can be avoided for objects already in shadow by clearing the respective bit from their shadowmasks.
\page SkeletalAnimation Skeletal animation
The AnimatedModel component renders GPU-skinned geometry and is capable of skeletal animation. When a model is assigned to it using \ref AnimatedModel::SetModel "SetModel()", it creates a bone node hierarchy under its scene node, and these bone nodes can be moved and rotated to animate.
There are two ways to play skeletal animations:
- Manually, by adding or removing animation states to the AnimatedModel, and advancing their time positions & weights, see \ref AnimatedModel::AddAnimationState "AddAnimationState()", \ref AnimatedModel::RemoveAnimationState "RemoveAnimationState()", \ref AnimationState::AddTime "AddTime()" and \ref AnimationState::SetWeight "SetWeight()".
- Using the AnimationController helper component: create it into the same scene node as the AnimatedModel, and use its functions, such as \ref AnimationController::Play "Play()" and \ref AnimationController::Stop "Stop()". AnimationController will advance the animations automatically during scene update. It also enables automatic network synchronization of animations, which the AnimatedModel does not do on its own.
Note that AnimationController does not by default stop non-looping animations automatically once they reach the end, so their final pose will stay in effect. Rather they must either be stopped manually, or the \ref AnimationController::SetAutoFade "SetAutoFade()" function can be used to make them automatically fade out once reaching the end.
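Both approaches in a minimal sketch; model, modelNode, cache and timeStep are assumed application state, and the animation path is an example:
\code
// Manual playback via an AnimationState
AnimationState* state = model->AddAnimationState(cache->GetResource<Animation>("Models/Ninja_Walk.ani"));
if (state)
{
    state->SetWeight(1.0f);
    state->SetLooped(true);
    state->AddTime(timeStep); // normally done each frame in an update handler
}

// Or let an AnimationController on the same node do the work
AnimationController* ctrl = modelNode->CreateComponent<AnimationController>();
ctrl->Play("Models/Ninja_Walk.ani", 0, true, 0.2f); // layer 0, looped, 0.2 s fade-in
\endcode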
\section SkeletalAnimation_Blending Animation blending
%Animation blending uses the concept of numbered layers. Layer numbers are unsigned 8-bit integers, and the active \ref AnimationState "AnimationStates" on each layer are processed in order from the lowest layer to the highest. As animations are applied by lerp-blending between absolute bone transforms, the effect is that the higher layer numbers have higher priority, as they will remain in effect last.
By default an Animation is played back by using all the available bone tracks. However an animation can be only partially applied by setting a start bone, see \ref AnimationState::SetStartBone "SetStartBone()". Once set, the bone tracks will be applied hierarchically starting from the start bone. For example, to apply an animation only to a bipedal character's upper body, which is typically parented to the spine bone, one could set the spine as the start bone.
It is also possible to enable additive (difference) blending mode on an animation, by using \ref AnimationState::SetBlendMode "SetBlendMode()" with the ABM_ADDITIVE parameter. In this mode the AnimationState applies a difference of the animation pose to the model's base pose, instead of straightforward lerp blending. This allows an animation to be applied "on top" of the other animations, but the end result can be unpredictable in case of large difference from the base pose. Additive animations should reside on higher priority layers than lerp blended animations or otherwise the lerp blending will "blend out" the additive animation.
\section SkeletalAnimation_Triggers Animation triggers
Animations can be accompanied with trigger data that contains timestamped Variant data to be interpreted by the application. This trigger data is in XML format next to the animation file itself. When an animation contains triggers, the AnimatedModel's scene node sends the E_ANIMATIONTRIGGER event each time a trigger point is crossed. The event data contains the timestamp, the animation name, and the variant data. Triggers will fire when the animation is advanced using \ref AnimationState::AddTime "AddTime()", but not when setting the absolute animation time position.
The trigger data definition is below. Either normalized (0 = animation start, 1 = animation end) or non-normalized (time in seconds) timestamps can be used. See bin/Data/Models/Ninja_Walk.xml and bin/Data/Models/Ninja_Stealth.xml for examples; NinjaSnowWar implements footstep particle effects using animation triggers.
\code
<animation>
<trigger time="t" normalizedtime="t" type="Int|Bool|Float|String..." value="x" />
<trigger ... />
</animation>
\endcode
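In C++ the event could be handled for example as follows; the class and handler names are hypothetical:
\code
SubscribeToEvent(modelNode, E_ANIMATIONTRIGGER, URHO3D_HANDLER(MyLogic, HandleAnimationTrigger));

void MyLogic::HandleAnimationTrigger(StringHash eventType, VariantMap& eventData)
{
    using namespace AnimationTrigger;

    String animationName = eventData[P_NAME].GetString();
    const Variant& data = eventData[P_DATA];
    // React to the trigger point, for example spawn a footstep particle effect
}
\endcode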
\section SkeletalAnimation_ManualControl Manual bone control
By default an AnimatedModel's bone nodes are reset on each frame, after which all active animation states are applied to the bones. This mechanism can be turned off on a per-bone basis to allow manual bone control. To do this, query a bone from the AnimatedModel's skeleton and set its \ref Bone::animated_ "animated_" member variable to false. For example:
\code
Bone* headBone = model->GetSkeleton().GetBone("Bip01_Head");
if (headBone)
headBone->animated_ = false;
\endcode
\section SkeletalAnimation_CombinedModels Combined skinned models
To create a combined skinned model from many parts (for example body + clothes), several AnimatedModel components can be created into the same scene node. These will then share the same bone nodes. The component that was first created will be the "master" model which drives the animations; the rest of the models will just skin themselves using the same bones. For this to work, all parts must have been authored from a compatible skeleton, with the same bone names. The master model should have all the bones required by the combined whole (for example a full biped), while the other models may omit unnecessary bones. Note that if the parts contain compatible vertex morphs (matching names), the vertex morph weights will also be controlled by the master model and copied to the rest.
\section SkeletalAnimation_NodeAnimation Node animations
Animations can also be applied outside of an AnimatedModel's bone hierarchy, to control the transforms of named nodes in the scene. The AssetImporter utility will automatically save node animations in both model and scene modes to the output file directory.
Like with skeletal animations, there are two ways to play back node animations:
- Instantiate an AnimationState yourself, using the constructor which takes a root scene node (animated nodes are searched for as children of this node) and an animation pointer. You need to manually advance its time position, and then call \ref AnimationState::Apply "Apply()" to apply to the scene nodes.
- Create an AnimationController component to the root scene node of the animation. This node should not contain an AnimatedModel component. Use the AnimationController to play back the animation just like you would play back a skeletal animation.
%Node animations do not support blending, as there is no initial pose to blend from. Instead they are always played back with full weight. Note that the scene node names in the animation and in the scene must match exactly, otherwise the animation will not play.
\page Particles Particle systems
The ParticleEmitter class derives from BillboardSet to implement a particle system that updates automatically.
The parameters of the particle system are stored in a ParticleEffect resource class, which uses XML format. Call \ref ParticleEmitter::SetEffect "SetEffect()" to assign the effect resource to the emitter. Most of the parameters can take either a single value, or minimum and maximum values to allow for random variation. See below for all supported parameters:
\code
<particleemitter>
<material name="MaterialName" />
<updateinvisible enable="true|false" />
<relative enable="true|false" />
<scaled enable="true|false" />
<sorted enable="true|false" />
<fixedscreensize enable="true|false" />
<emittertype value="sphere|box" />
<emittersize value="x y z" />
<emitterradius value="x" />
<direction min="x1 y1 z1" max="x2 y2 z2" />
<constantforce value="x y z" />
<dampingforce value="x" />
<activetime value="t" />
<inactivetime value="t" />
<interval min="t1" max="t2" />
<emissionrate min="t1" max="t2" />
<particlesize min="x1 y1" max="x2 y2" />
<timetolive min="t1" max="t2" />
<velocity min="x1" max="x2" />
<rotation min="x1" max="x2" />
<rotationspeed min="x1" max="x2" />
<sizedelta add="x" mul="y" />
<color value="r g b a" />
<colorfade color="r g b a" time="t" />
<texanim uv="u1 v1 u2 v2" time="t" />
</particleemitter>
\endcode
Notes:
- Zero active or inactive time period means infinite.
- Interval is the reciprocal of emission rate. Either can be used to define the rate at which new particles are emitted.
- Instead of defining a single color element, several colorfade elements can be defined in time order to describe how the particles change color over time.
- Use several texanim elements to define a texture animation for the particles.
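Assigning an effect in code is straightforward; a minimal sketch using one of the example effect files (scene_ and cache are assumed application state):
\code
Node* smokeNode = scene_->CreateChild("Smoke");
ParticleEmitter* emitter = smokeNode->CreateComponent<ParticleEmitter>();
emitter->SetEffect(cache->GetResource<ParticleEffect>("Particle/Smoke.xml"));
\endcode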
\page Zones Zones
A Zone controls ambient lighting and fogging. Each geometry object determines the zone it is inside (by testing against the zone's oriented bounding box) and uses that zone's ambient light color, fog color and fog start/end distance for rendering. For the case of multiple overlapping zones, zones also have an integer priority value, and objects will choose the highest priority zone they touch.
The viewport will be initially cleared to the fog color of the zone found at the camera's far clip distance. If no zone is found either for the far clip or an object, a default zone with black ambient and fog color will be used.
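A minimal sketch of setting up a zone in code (scene_ is assumed application state):
\code
Node* zoneNode = scene_->CreateChild("Zone");
Zone* zone = zoneNode->CreateComponent<Zone>();
zone->SetBoundingBox(BoundingBox(-100.0f, 100.0f)); // extents in the zone's local space
zone->SetAmbientColor(Color(0.15f, 0.15f, 0.15f));
zone->SetFogColor(Color(0.5f, 0.5f, 0.7f));
zone->SetFogStart(100.0f);
zone->SetFogEnd(300.0f);
zone->SetPriority(0); // raise for overlapping zones that should take precedence
\endcode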
Zones have three special flags: height fog mode, override mode and ambient gradient.
- When height fog mode is enabled, objects inside the zone receive height fog in addition to distance fog. The fog's \ref Zone::SetFogHeight "height level" is specified relative to the zone's world position. The width of the fog band on the Y-axis is specified by the \ref Zone::SetFogHeightScale "fog height scale" parameter.
- If the camera is inside a zone with override mode enabled, all rendered objects will use that zone's ambient and fog settings, instead of the zone they belong to. This can be used for example to implement an underwater effect.
- When ambient gradient mode is enabled, the zone's own ambient color value is not used; instead the zone will look for the two highest-priority neighbor zones that touch it at the minimum and maximum Z faces of its oriented bounding box: any objects inside will then get a per-vertex ambient color fade between the neighbor zones' ambient colors. To ensure objects use the gradient zone when inside it, the gradient zone should have higher priority than the neighbor zones. The gradient is always oriented along the gradient zone's local Z axis.
Like lights, zones also define a lightmask and a shadowmask (with all bits set by default.) An object's final lightmask for light culling is determined by ANDing the object lightmask and the zone lightmask. The final shadowmask is also calculated in the same way.
Finally, zones can optionally define a texture, see \ref Zone::SetZoneTexture "SetZoneTexture()". This should be either a cube or 3D texture that will be bound to the zone texture unit (TU_ZONE) when rendering objects within the zone. This could be used to achieve for example precomputed environment reflections, ambient lighting or ambient occlusion in custom shaders; the default shaders do not use this texture. Due to texture unit limitations it is not available on OpenGL ES 2.0.
\page AuxiliaryViews Auxiliary views
Auxiliary views are viewports assigned to a RenderSurface by calling its \ref RenderSurface::SetViewport "SetViewport()" function. By default these will be rendered on each frame that the texture containing the surface is visible, and can be typically used to implement for example camera displays or reflections. The texture in question must have been created in rendertarget mode, see Texture's \ref Texture2D::SetSize "SetSize()" function.
The viewport is not assigned directly to the texture because of cube map support: a renderable cube map has 6 render surfaces, and done this way, a different camera could be assigned to each.
A "backup texture" can be assigned to the rendertarget texture: because it is illegal to sample a texture that is also being simultaneously rendered to (in cases where the texture becomes "recursively" visible in the auxiliary view), the backup texture can be used to specify which texture should be used in place instead.
Rendering detailed auxiliary views can easily have a large performance impact. Some things you can do for optimization with the auxiliary view camera:
- Set the far clip distance as small as possible.
- Use viewmasks on the camera and the scene objects to only render some of the objects in the auxiliary view.
- Use the camera's \ref Camera::SetViewOverrideFlags "SetViewOverrideFlags()" function to disable shadows, to disable occlusion, or force the lowest material quality.
The surface can also be configured to always update its viewports, or to only update when manually requested. See \ref RenderSurface::SetUpdateMode "SetUpdateMode()". For example an editor widget showing a rendered texture might use either of those modes. Call \ref RenderSurface::QueueUpdate "QueueUpdate()" to request a manual update of the surface on the current frame.
\page Input Input
The Input subsystem provides keyboard, mouse, joystick and touch input both via a polled interface and events. This subsystem is also used for querying whether the application window has input focus or is minimized.
The subsystem is always instantiated, even in headless mode, but is active only once the application window has been created. Once active, the subsystem takes over the operating system mouse cursor. It will be hidden by default, so the UI should be used to render a software cursor if necessary. For editor-like applications the operating system cursor can be made visible by calling \ref Input::SetMouseVisible "SetMouseVisible()".
The input events include:
- E_MOUSEBUTTONUP: a mouse button was released.
- E_MOUSEBUTTONDOWN: a mouse button was pressed.
- E_MOUSEMOVE: the mouse moved.
- E_MOUSEWHEEL: the mouse wheel moved.
- E_KEYUP: a key was released.
- E_KEYDOWN: a key was pressed.
- E_TEXTINPUT: a string of translated text input in UTF8 format. May contain a single character or several.
- E_JOYSTICKCONNECTED: a joystick was plugged in.
- E_JOYSTICKDISCONNECTED: a joystick was disconnected.
- E_JOYSTICKBUTTONDOWN: a joystick button was pressed.
- E_JOYSTICKBUTTONUP: a joystick button was released.
- E_JOYSTICKAXISMOVE: a joystick axis was moved.
- E_JOYSTICKHATMOVE: a joystick POV hat was moved.
- E_TOUCHBEGIN: a finger touched the screen.
- E_TOUCHEND: a finger was lifted from the screen.
- E_TOUCHMOVE: a finger moved on the screen.
- E_GESTURERECORDED : recording a touch gesture is complete.
- E_GESTUREINPUT : a touch gesture was recognized.
- E_MULTIGESTURE : a multi-finger pinch/rotation touch gesture is underway.
- E_DROPFILE : a file was drag-dropped on the application window.
- E_INPUTFOCUS : application input focus or window minimization state changed.
- E_MOUSEVISIBLECHANGED : the visibility of the operating system mouse cursor was changed.
- E_EXITREQUESTED : application exit was requested (eg. with the window close button.)
- E_INPUTBEGIN : input handling starts.
- E_INPUTEND : input handling ends.
- E_SDLRAWINPUT : raw SDL event is sent for customized event processing.
\section InputKeyboard Keyboard and mouse input
Key events include both the symbolic keycode ("Key") that depends on the keyboard layout, the layout- and operating system-independent SDL scancode ("Scancode"), and the true operating system-specific raw keycode ("Raw").
The input polling API differentiates between the initiation of a key/mouse button press, and holding the key or button down. \ref Input::GetKeyPress "GetKeyPress()" and \ref Input::GetMouseButtonPress "GetMouseButtonPress()" return true only for one frame (the initiation) while \ref Input::GetKeyDown "GetKeyDown()" and \ref Input::GetMouseButtonDown "GetMouseButtonDown()" return true as long as the key or button is held down. To check whether keys are down or pressed by scancode, use \ref Input::GetScancodeDown "GetScancodeDown()" and \ref Input::GetScancodePress "GetScancodePress()". Functions also exist for converting keycodes to scancodes or vice versa, or getting key names. See for example \ref Input::GetKeyName "GetKeyName()" and \ref Input::GetKeyFromScancode "GetKeyFromScancode()".
Mouse motion since the last frame can be accessed with \ref Input::GetMouseMove() "GetMouseMove()". The cursor position within the window can be queried with \ref Input::GetMousePosition "GetMousePosition()".
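A short polling sketch illustrating the difference (MoveForward() and Jump() are hypothetical application functions):
\code
Input* input = GetSubsystem<Input>();
if (input->GetKeyDown(KEY_W))
    MoveForward();             // held down: true on every frame
if (input->GetKeyPress(KEY_SPACE))
    Jump();                    // initiation: true only on the first frame
IntVector2 mouseMove = input->GetMouseMove();
\endcode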
In AngelScript, the polling API is accessed via properties: input.keyDown[], input.keyPress[], input.scancodeDown[], input.scancodePress[], input.mouseButtonDown[], input.mouseButtonPress[], input.mouseMove, input.mousePosition.
\section InputMouseModes Mouse modes
The operating system mouse cursor can be used in four modes which can be switched with \ref Input::SetMouseMode() "SetMouseMode()".
- MM_ABSOLUTE is the default behaviour, allowing the toggling of operating system cursor visibility and allowing the cursor to escape the window when visible.
When the operating system cursor is invisible in absolute mouse mode, the mouse is confined to the window.
If the operating system and %UI cursors are both invisible, interaction with the \ref UI "user interface" will be limited (eg: drag move / drag end events will not trigger).
SetMouseMode(MM_ABSOLUTE) will call SetMouseGrabbed(false).
- MM_RELATIVE sets the operating system cursor to invisible and confines the cursor to the window.
The operating system cursor cannot be set to be visible in this mode via SetMouseVisible(), however changes are tracked and will be restored when another mouse mode is set.
When the virtual cursor is also invisible, %UI interaction will still function as normal (eg: drag events will trigger).
SetMouseMode(MM_RELATIVE) will call SetMouseGrabbed(true).
- MM_WRAP grabs the mouse from the operating system and confines the operating system cursor to the window, wrapping the cursor when it is near the edges.
SetMouseMode(MM_WRAP) will call SetMouseGrabbed(true).
- MM_FREE does not grab/confine the mouse cursor even when it is hidden. This can be used for cases where the operating system should render the cursor
outside the window, while custom cursor rendering (with SetMouseVisible(false)) is performed inside it.
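Switching modes at runtime, for example when entering and leaving FPS-style camera control, could look like this sketch:
\code
Input* input = GetSubsystem<Input>();
input->SetMouseMode(MM_RELATIVE); // hide and confine the cursor, receive relative motion
// ... later, e.g. when opening a menu:
input->SetMouseMode(MM_ABSOLUTE);
input->SetMouseVisible(true);
\endcode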
\section InputJoystick Joystick input
Plugged in joysticks will begin sending input events automatically. Each joystick will be assigned a joystick ID which will be used in subsequent joystick events, as well as for retrieving the \ref JoystickState "joystick state". Use \ref Input::GetJoystick "GetJoystick()" to retrieve the joystick state by ID. In case you do not have the ID, you can also use \ref Input::GetJoystickByIndex() "GetJoystickByIndex()" which uses a zero-based index; see \ref Input::GetNumJoysticks "GetNumJoysticks()" for the number of currently connected joysticks. The ID, as well as the joystick name, can be looked up from the joystick state.
If the joystick model is recognized by SDL as a game controller, the button and axis mappings use known constants such as CONTROLLER_BUTTON_A or CONTROLLER_AXIS_LEFTX, so there is no need to guess them. Use \ref JoystickState::IsController "IsController()" to distinguish between a game controller and an unrecognized joystick.
On platforms that support the accelerometer, it will appear as a "virtual" joystick.
\section InputTouch Touch input
On platforms where touch input is available, touch begin/end/move events will be sent, as well as multi-gesture events with pinch/rotation delta values when more than one finger is pressed down. The current finger touches can also be accessed via a polling API: \ref Input::GetNumTouches "GetNumTouches()" and \ref Input::GetTouch "GetTouch()".
Touch gestures can be recorded using SDL's inbuilt gesture recognition system. Use \ref Input::RecordGesture "RecordGesture()" to start recording. The following finger movements will be recorded until the finger is lifted, at which point the recording ends and the E_GESTURERECORDED event is sent with the hash ID of the new gesture. The current in-memory gesture(s) can be saved or loaded as binary data using the \ref Input::SaveGestures "SaveGestures()", \ref Input::SaveGesture "SaveGesture()" and \ref Input::LoadGestures "LoadGestures()" functions.
Whenever a recognized gesture is entered by the user, the E_GESTUREINPUT event will be sent. In addition to the ID of the best matched gesture, it contains the center position and an error metric (lower = better) to help filter out false gestures.
Note that all recorded (whether saved or not) and loaded gestures are held in-memory. Two additional functions are available to clear them: \ref Input::RemoveGesture "RemoveGesture()" to selectively clear a gesture by its ID and \ref Input::RemoveAllGestures "RemoveAllGestures()" to clear them all.
Touch input can also emulate a virtual joystick by displaying on-screen buttons. See the function \ref Input::AddScreenJoystick "AddScreenJoystick()".
Touch emulation can be used to test mobile applications on a desktop machine without a touch screen. See \ref Input::SetTouchEmulation "SetTouchEmulation()". When touch emulation is enabled, actual mouse events are no longer sent and the operating system mouse cursor is forced visible. The left mouse button acts as a moving finger, while the rest of the mouse buttons act as stationary fingers for multi-finger gestures. For example pressing down both left and right mouse buttons, then dragging the mouse with the buttons still pressed would emulate a two-finger pinch zoom-in gesture.
\section InputPlatformSpecific Platform-specific details
On platforms that support it (such as Android) an on-screen virtual keyboard can be shown or hidden. When shown, keypresses from the virtual keyboard will be sent as text input events just as if typed from an actual keyboard. Show or hide it by calling \ref Input::SetScreenKeyboardVisible "SetScreenKeyboardVisible()". The UI subsystem can also automatically show the virtual keyboard when a LineEdit element is focused, and hide it when defocused. This behavior can be controlled by calling \ref UI::SetUseScreenKeyboard "SetUseScreenKeyboard()".
On Windows the user must first touch the screen once before touch input is activated. Trying to record or load touch gestures will fail before that.
\page Audio Audio
The Audio subsystem implements an audio output stream. Once it has been initialized, the following operations are supported:
- Playing raw audio, Ogg Vorbis or WAV Sound resources using the SoundSource component. This allows manual stereo panning of mono sounds; stereo sounds will be output with their original stereo mix.
- Playing the above sound formats in pseudo-3D using the SoundSource3D component. It has stereo positioning and distance attenuation, but does not (at least yet) filter the sound depending on the direction.
A sound source component needs to be created into a Node to be considered "enabled" and be able to play. However, that node does not need to belong to a scene (e.g. for positionless %UI / HUD sounds, which would just be unnecessary clutter in a 3D scene, you can simply instantiate a node in application code, similar to a camera existing outside the scene.)
To hear pseudo-3D positional sounds, a SoundListener component must likewise exist in a node and be assigned to the audio subsystem by calling \ref Audio::SetListener "SetListener()". The node's position & rotation define the listening spot. If the sound listener's node belongs to a scene, it only hears sounds from within that specific scene, but if it has been created outside of a scene it will hear any sounds.
The output is software mixed for an unlimited number of simultaneous sounds. Ogg Vorbis sounds are decoded on the fly, and decoding them can be memory- and CPU-intensive, so WAV files are recommended when a large number of short sound effects need to be played.
For purposes of volume control, each SoundSource can be classified into a user-defined category. The final volume level is the product of the category's master gain and the individual SoundSource gain set using \ref SoundSource::SetGain "SetGain()".
To control the category volumes, use \ref Audio::SetMasterGain "SetMasterGain()", which defines the category if it didn't already exist.
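A minimal playback sketch in C++ (the sound asset name is illustrative):
\code
Node* soundNode = scene_->CreateChild("Sound");
SoundSource* soundSource = soundNode->CreateComponent<SoundSource>();
soundSource->SetSoundType(SOUND_EFFECT); // Volume control category
Sound* sound = GetSubsystem<ResourceCache>()->GetResource<Sound>("Sounds/Explosion.wav");
soundSource->Play(sound);
// Scale the volume of the whole category
GetSubsystem<Audio>()->SetMasterGain(SOUND_EFFECT, 0.75f);
\endcode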
Note that the Audio subsystem is always instantiated, but in headless mode the playback of sounds is simulated, taking the sound length and frequency into account. This allows basing logic on whether a specific sound is still playing or not, even in server code.
\section Audio_Parameters Sound parameters
A standard WAV file can not tell whether it should loop, and raw audio does not contain any header information. Parameters for the Sound resource can optionally be specified through an XML file that has the same name as the sound, but with an .xml extension. Possible elements and attributes are described below:
\code
<sound>
<format frequency="x" sixteenbit="true|false" stereo="true|false" />
<loop enable="true|false" start="x" end="x" />
</sound>
\endcode
The frequency is in Hz, and loop start and end are bytes from the start of audio data. If a loop is enabled without specifying the start and end, it is assumed to be the whole sound. Ogg Vorbis compressed sounds do not support specifying the loop range, only whether whole sound looping is enabled or disabled.
\section Audio_Stream Sound streaming
In addition to playing existing sound resources, sound can be generated during runtime using the SoundStream class and its subclasses. To start playback of a stream on a SoundSource, call \ref SoundSource::Play(SoundStream* stream) "Play(SoundStream* stream)".
%Sound streaming is used internally to implement on-the-fly Ogg Vorbis decoding. It is only available in C++ code and not scripting due to its low-level nature. See the SoundSynthesis C++ sample for an example of using the BufferedSoundStream subclass, which allows the sound data to be queued for playback from the main thread.
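A minimal sketch of queued playback with BufferedSoundStream (here 'node' is an existing scene node and 'buffer' a hypothetical array of signed 16-bit samples filled by application code):
\code
SharedPtr<BufferedSoundStream> stream(new BufferedSoundStream());
stream->SetFormat(44100, true, false); // 44.1 kHz, 16-bit, mono
SoundSource* soundSource = node->CreateComponent<SoundSource>();
soundSource->Play(stream);
// Queue generated samples for playback; repeat as more data is produced
stream->AddData(buffer, numSamples * sizeof(signed short));
\endcode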
\section Audio_Events Audio events
A sound source will send the E_SOUNDFINISHED event through its scene node when the playback of a sound has ended. This can be used for example to know when to remove a temporary node created just for playing a sound effect, or for tying game events to sound playback.
\page Physics Physics
Urho3D implements rigid body physics simulation using the Bullet library.
To use, a PhysicsWorld component must first be created to the Scene.
The physics simulation has its own fixed update rate, which by default is 60Hz. When the rendering framerate is higher than the physics update rate, physics motion is interpolated so that it always appears smooth. The update rate can be changed with the \ref PhysicsWorld::SetFps "SetFps()" function. The physics update rate also determines the frequency of fixed timestep scene logic updates. A hard limit for physics steps per frame, or an adaptive timestep, can be configured with the \ref PhysicsWorld::SetMaxSubSteps "SetMaxSubSteps()" function. These can help to prevent a "spiral of death" due to the CPU being unable to handle the physics load. However, note that using either can lead to time slowing down (when steps are limited) or inconsistent physics behavior (when using adaptive step.)
The other physics components are:
- RigidBody: a physics object instance. Its parameters include mass, linear/angular velocities, friction and restitution.
- CollisionShape: defines physics collision geometry. The supported shapes are box, sphere, cylinder, capsule, cone, triangle mesh, convex hull and heightfield terrain (requires the Terrain component in the same node.)
- Constraint: connects two RigidBodies together, or one RigidBody to a static point in the world. Point, hinge, slider and cone twist constraints are supported.
\section Physics_Movement Movement and collision
Both a RigidBody and at least one CollisionShape component must exist in a scene node for it to behave physically (a collision shape by itself does nothing.) Several collision shapes may exist in the same node to create compound shapes. An offset position and rotation relative to the node's transform can be specified for each. Triangle mesh and convex hull geometries require specifying a Model resource and the LOD level to use.
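A minimal setup sketch in C++:
\code
scene_->CreateComponent<PhysicsWorld>();
Node* boxNode = scene_->CreateChild("Box");
RigidBody* body = boxNode->CreateComponent<RigidBody>();
body->SetMass(1.0f); // Mass greater than 0 makes the body dynamic
body->SetFriction(0.5f);
CollisionShape* shape = boxNode->CreateComponent<CollisionShape>();
shape->SetBox(Vector3::ONE); // 1x1x1 box centered on the node
\endcode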
CollisionShape provides two APIs for defining the collision geometry. Either setting individual properties such as the \ref CollisionShape::SetShapeType "shape type" or \ref CollisionShape::SetSize "size", or specifying both the shape type and all its properties at once: see for example \ref CollisionShape::SetBox "SetBox()", \ref CollisionShape::SetCapsule "SetCapsule()" or \ref CollisionShape::SetTriangleMesh "SetTriangleMesh()".
RigidBodies can be either static or moving. A body is static if its mass is 0, and moving if the mass is greater than 0. Note that the triangle mesh collision shape is not supported for moving objects; it will not collide properly due to limitations in the Bullet library. In this case the convex hull or GImpact triangle mesh shape can be used instead.
The collision behaviour of a rigid body is controlled by several variables. First, the collision layer and mask define which other objects to collide with: see \ref RigidBody::SetCollisionLayer "SetCollisionLayer()" and \ref RigidBody::SetCollisionMask "SetCollisionMask()". By default a rigid body is on layer 1; the layer will be ANDed with the other body's collision mask to see if the collision should be reported. A rigid body can also be set to \ref RigidBody::SetTrigger "trigger mode" to only report collisions without actually applying collision forces. This can be used to implement trigger areas. Finally, the \ref RigidBody::SetFriction "friction", \ref RigidBody::SetRollingFriction "rolling friction" and \ref RigidBody::SetRestitution "restitution" coefficients (between 0 and 1) control how kinetic energy is transferred in the collisions. Note that rolling friction is by default zero, and if you want for example a sphere rolling on the floor to eventually stop, you need to set a non-zero rolling friction on both the sphere and floor rigid bodies.
By default rigid bodies can move and rotate about all 3 coordinate axes when forces are applied. To limit the movement, use \ref RigidBody::SetLinearFactor "SetLinearFactor()" and \ref RigidBody::SetAngularFactor "SetAngularFactor()" and set the axes you wish to use to 1 and those you do not wish to use to 0. For example moving humanoid characters are often represented by a capsule shape: to ensure they stay upright and only rotate when you explicitly set the rotation in code, set the angular factor to 0, 0, 0.
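For example, to keep a capsule-shaped character upright:
\code
body->SetLinearFactor(Vector3::ONE);   // Allow movement on all axes
body->SetAngularFactor(Vector3::ZERO); // Never rotate due to physics forces
\endcode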
To prevent tunneling of a fast moving rigid body through obstacles, continuous collision detection can be used. It approximates the object as a swept sphere, but has a performance cost, so it should be used only when necessary. Call \ref RigidBody::SetCcdRadius "SetCcdRadius()" and \ref RigidBody::SetCcdMotionThreshold "SetCcdMotionThreshold()" with non-zero values to enable. To prevent false collisions, the body's actual collision shape should completely contain the radius. The motion threshold is the required motion per simulation step for CCD to kick in: for example a box with size 1 should have motion threshold 1 as well.
All physics calculations are performed in world space. Nodes containing a RigidBody component should preferably be parented to the Scene (root node) to ensure independent motion. For ragdolls this is not absolute, as retaining proper bone hierarchy is more important, but be aware that the ragdoll bones may drift far from the animated model's root scene node.
When several collision shapes are present in the same node, edits to them can cause redundant mass/inertia update computation in the RigidBody. To optimize performance in these cases, the edits can be enclosed between calls to \ref RigidBody::DisableMassUpdate "DisableMassUpdate()" and \ref RigidBody::EnableMassUpdate "EnableMassUpdate()".
\section Physics_ConstraintParameters Constraint parameters
%Constraint position (and rotation if relevant) need to be defined in relation to both connected bodies, see \ref Constraint::SetPosition "SetPosition()" and \ref Constraint::SetOtherPosition "SetOtherPosition()". If the constraint connects a body to the static world, then the "other body position" and "other body rotation" mean the static end's transform in world space. There is also a helper function \ref Constraint::SetWorldPosition "SetWorldPosition()" to assign the constraint to a world-space position; this sets both relative positions.
Specifying the constraint's motion axis instead of rotation is provided as an alternative as it can be more intuitive, see \ref Constraint::SetAxis "SetAxis()". However, by explicitly specifying a rotation you can be sure the constraint is oriented precisely as you want.
Hinge, slider and cone twist constraints support defining limits for the motion. To be generic, these are encoded slightly unintuitively into Vector2's. For a hinge constraint, the low and high limit X coordinates define the minimum and maximum angle in degrees. For example -45 to 45. For a slider constraint, the X coordinates define the maximum linear motion in world space units, and the Y coordinates define maximum angular motion in degrees. The cone twist constraint uses only the high limit to define the maximum angles (minimum angle is always -maximum) in the following manner: The X coordinate is the limit of the twist (main) axis, while Y is the limit of the swinging motion about the other axes.
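For example, a hinge between a door body and the static world (a sketch; 'doorNode' is assumed to already contain a RigidBody and a CollisionShape):
\code
Constraint* hinge = doorNode->CreateComponent<Constraint>();
hinge->SetConstraintType(CONSTRAINT_HINGE);
hinge->SetAxis(Vector3::UP);
hinge->SetWorldPosition(doorNode->GetWorldPosition()); // Sets both relative positions
hinge->SetLowLimit(Vector2(-45.0f, 0.0f));  // Minimum hinge angle in degrees
hinge->SetHighLimit(Vector2(45.0f, 0.0f));  // Maximum hinge angle in degrees
\endcode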
\section Physics_Events Physics events
The physics world sends 8 types of events during its update step:
- E_PHYSICSPRESTEP before the simulation is stepped.
- E_PHYSICSCOLLISIONSTART for each new collision during the simulation step. The participating scene nodes will also send E_NODECOLLISIONSTART events.
- E_PHYSICSCOLLISION for each ongoing collision during the simulation step. The participating scene nodes will also send E_NODECOLLISION events.
- E_PHYSICSCOLLISIONEND for each collision which has ceased. The participating scene nodes will also send E_NODECOLLISIONEND events.
- E_PHYSICSPOSTSTEP after the simulation has been stepped.
Note that if the rendering framerate is high, the physics might not be stepped at all on each frame: in that case those events will not be sent.
\section Physics_Collision Reading collision events
A new or ongoing physics collision event will report the collided scene nodes and rigid bodies, whether either of the bodies is a trigger, and the list of contact points.
The contact points are encoded in a byte buffer, which can be read using the VectorBuffer or MemoryBuffer helper class. The following structure repeats for each contact:
- World-space position (Vector3)
- Normal vector (Vector3)
- Distance, negative when interpenetrating (float)
- Impulse applied in collision (float)
An example of reading collision event and contact point data in script, from NinjaSnowWar game object collision handling code:
\code
void HandleNodeCollision(StringHash eventType, VariantMap& eventData)
{
Node@ otherNode = eventData["OtherNode"].GetPtr();
RigidBody@ otherBody = eventData["OtherBody"].GetPtr();
VectorBuffer contacts = eventData["Contacts"].GetBuffer();
while (!contacts.eof)
{
Vector3 contactPosition = contacts.ReadVector3();
Vector3 contactNormal = contacts.ReadVector3();
float contactDistance = contacts.ReadFloat();
float contactImpulse = contacts.ReadFloat();
// Do something with the contact data...
}
}
\endcode
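The same handling in C++ uses the NodeCollision event parameter constants and the MemoryBuffer class ('MyLogic' is a hypothetical handler class):
\code
void MyLogic::HandleNodeCollision(StringHash eventType, VariantMap& eventData)
{
    using namespace NodeCollision;
    Node* otherNode = static_cast<Node*>(eventData[P_OTHERNODE].GetPtr());
    RigidBody* otherBody = static_cast<RigidBody*>(eventData[P_OTHERBODY].GetPtr());
    MemoryBuffer contacts(eventData[P_CONTACTS].GetBuffer());
    while (!contacts.IsEof())
    {
        Vector3 contactPosition = contacts.ReadVector3();
        Vector3 contactNormal = contacts.ReadVector3();
        float contactDistance = contacts.ReadFloat();
        float contactImpulse = contacts.ReadFloat();
        // Do something with the contact data...
    }
}
\endcode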
\section Physics_Queries Physics queries
The following queries into the physics world are provided:
- Raycasts, see \ref PhysicsWorld::Raycast "Raycast()" and \ref PhysicsWorld::RaycastSingle "RaycastSingle()".
- %Sphere cast (raycast with thickness), see \ref PhysicsWorld::SphereCast "SphereCast()".
- %Sphere and box overlap tests, see \ref PhysicsWorld::GetRigidBodies() "GetRigidBodies()".
- Which other rigid bodies are colliding with a body, see \ref RigidBody::GetCollidingBodies() "GetCollidingBodies()". In script this maps into the collidingBodies property.
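For example, a single raycast in C++ (a sketch; the ray origin and direction are illustrative):
\code
PhysicsWorld* physicsWorld = scene_->GetComponent<PhysicsWorld>();
PhysicsRaycastResult result;
Ray ray(cameraNode->GetWorldPosition(), cameraNode->GetWorldDirection());
physicsWorld->RaycastSingle(result, ray, 250.0f); // Maximum distance 250 units
if (result.body_)
{
    // result.position_ and result.normal_ describe the hit point
}
\endcode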
\page Navigation Navigation
Urho3D implements navigation mesh generation and pathfinding by using the Recast & Detour libraries.
The navigation functionality is exposed through the NavigationMesh and Navigable components.
NavigationMesh collects geometry from its child nodes that have been tagged with the Navigable component. By default the Navigable component behaves recursively: geometry from its child nodes will be collected too, unless the recursion is disabled. If possible, physics CollisionShape geometry is preferred; however, only the triangle mesh, convex hull and box shapes are supported. If no suitable physics geometry is found from a node, static drawable geometry is used instead from StaticModel and TerrainPatch components if they exist. The LOD level used is the same as for occlusion and raycasts (see \ref StaticModel::SetOcclusionLodLevel "SetOcclusionLodLevel()").
The easiest way to make the whole scene participate in navigation mesh generation is to create the %NavigationMesh and %Navigable components to the scene root node.
The navigation mesh generation must be triggered manually by calling \ref NavigationMesh::Build "Build()". After the initial build, portions of the mesh can also be rebuilt by specifying a world bounding box for the volume to be rebuilt, but this can not expand the total bounding box size. Once the navigation mesh is built, it will be serialized and deserialized with the scene.
To query for a path between start and end points on the navigation mesh, call \ref NavigationMesh::FindPath "FindPath()".
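A minimal sketch in C++ ('startPosition' and 'endPosition' are hypothetical world-space points):
\code
NavigationMesh* navMesh = scene_->CreateComponent<NavigationMesh>();
scene_->CreateComponent<Navigable>(); // Make the whole scene participate
navMesh->Build();

PODVector<Vector3> path;
navMesh->FindPath(path, startPosition, endPosition);
\endcode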
For a demonstration of the navigation capabilities, check the related sample application (15_Navigation), which features partial navigation mesh rebuilds (objects can be created and deleted) and querying paths.
Navigation meshes may be generated using either Watershed or Monotone triangulation. Watershed typically produces more polygons and more natural paths, while Monotone is faster to generate but may produce undesirable path artifacts.
\section PathfindingWeights Pathfinding weights
Axis-aligned box regions of the navigation mesh may be marked as belonging to specific 'area types' using NavArea components. The weight for a given area type is assigned using the SetAreaCost(unsigned, float) method of the NavigationMesh (or DynamicNavigationMesh).
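For example (a sketch; 'areaNode' is an existing node and area type 5 is arbitrary):
\code
NavArea* area = areaNode->CreateComponent<NavArea>();
area->SetAreaID(5);
area->SetBoundingBox(BoundingBox(Vector3(-5.0f, 0.0f, -5.0f), Vector3(5.0f, 1.0f, 5.0f)));
navMesh->SetAreaCost(5, 10.0f); // Make area type 5 ten times as costly to cross
\endcode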
\section DynamicNavMesh Dynamic Navigation Mesh
The DetourTileCache library is used to provide a variant of the NavigationMesh that supports the addition and removal of dynamic obstacles, the trade-off for which is almost twice the memory consumption. However, the addition and removal of obstacles is significantly faster than partially rebuilding a NavigationMesh.
Obstacles are limited to cylindrical shapes consisting of a radius and height. When an obstacle is added (or enabled), DetourTileCache will use a stored copy of the obstacle-free DynamicNavigationMesh to regenerate the relevant tiles.
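For example (a sketch; 'obstacleNode' is a node within the navigation mesh's hierarchy):
\code
Obstacle* obstacle = obstacleNode->CreateComponent<Obstacle>();
obstacle->SetRadius(1.0f);
obstacle->SetHeight(2.0f);
\endcode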
Changes that cannot be represented in the form of obstacles will require a partial rebuild using the Build() method and have no advantages over rebuilds of the standard NavigationMesh.
In all other facets the usage of the DynamicNavigationMesh is identical to that of the regular NavigationMesh. See the 39_CrowdNavigation sample application for usage of Obstacles and the DynamicNavigationMesh.
\section CrowdNavigation Crowd navigation
In addition to the basic pathfinding, additional navigation components use the DetourCrowd library to provide pathfinding between crowd agents and the possibility to add dynamic obstacles. These components are DynamicNavigationMesh, CrowdManager, CrowdAgent, NavArea & Obstacle.
CrowdAgents navigate using "targets", which are assigned with the SetMoveTarget(Vector3) method. The agent will halt upon reaching its destination. To halt an agent in progress, set its move target to its current position.
CrowdAgents will fire events under different circumstances. The most important of these is the CrowdAgentFailure event, which will be sent if the CrowdAgent is either in an invalid state (such as off the NavigationMesh) or if the target destination is no longer reachable through the NavigationMesh. As a CrowdAgent moves through space it may send other events regarding its current state using the CrowdAgentStateChanged event, which will include the state CROWD_AGENT_TARGET_ARRIVED when the agent has reached the target destination.
CrowdAgents handle navigation areas differently. The CrowdManager contains 16 different "filter types" (0 - 15) which have different settings for area costs. These costs are assigned in the CrowdManager using the SetAreaCost(unsigned filterTypeID, unsigned areaID, float weight) method. The filter a CrowdAgent will use is assigned to the agent using its SetNavigationFilterType(unsigned filterTypeID) method.
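A minimal sketch using the methods named above ('agentNode' and 'targetPosition' are hypothetical):
\code
CrowdManager* crowdManager = scene_->CreateComponent<CrowdManager>();
CrowdAgent* agent = agentNode->CreateComponent<CrowdAgent>();
agent->SetRadius(0.5f);
agent->SetHeight(2.0f);
agent->SetMoveTarget(targetPosition); // The agent starts navigating immediately
\endcode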
See the 39_CrowdNavigation sample application for an example on how to use CrowdAgents and the CrowdManager.
\page IK Inverse Kinematics
\section ikoverview Overview
IK (Inverse kinematics) can be useful in many situations ranging from
procedural animation to small adjustments of animation. Simply put, IK is used
when you want to position the tips of a hierarchical structure at a known
location and need to calculate all of the rotations of the parent joints to
achieve this.
Examples include: Moving a hand to pick up an object, or adjusting a foot so
it's always touching the ground, regardless of incline.
\section ikterminology Terminology
It is helpful to know some of the terminology used when talking about inverse
kinematics, so you can better understand what's being said.
- The upper-most node in a solver's tree is known as the \a base \a node or \a root \a node.
- Nodes which need to be moved to a specific target location (example: The hand of a human) are called an \a end \a effector.
- Joints from which multiple children branch off are called a \a sub-base \a node.
- IK solvers work most efficiently on single "strings" of nodes, which are referred to as \a chains.
- The entire structure (in the case of a human with two hands) is called a \a chain \a tree.
\section ikendeffectors End Effectors
Effectors are used to set the target position and rotation of a node. You can
create one by attaching the IKEffector component to a node.
\code{.cpp}
Node* handNode = modelNode->GetChild("Hand.R", true);
IKEffector* effector = handNode->CreateComponent<IKEffector>(); // C++
IKEffector@ effector = handNode.CreateComponent("IKEffector"); // AngelScript
local effector = handNode:CreateComponent("IKEffector") -- Lua
\endcode
You can then give the effector a target position (for example, the position of
an apple) using IKEffector::SetTargetPosition or you can tell the effector to
automatically track the position of a node with IKEffector::SetTargetNode.
\code{.cpp}
effector->SetTargetPosition(appleNode->GetWorldPosition()); // C++
effector.targetPosition = appleNode.worldPosition; // AngelScript
effector.targetPosition = appleNode.worldPosition -- Lua
\endcode
You can also tell the effector to try and match a target rotation using
IKEffector::SetTargetRotation. This is useful if you want your hand to
point in a particular direction in relation to the apple (this feature needs
to be enabled in the solver, which is discussed further below).
The target position and rotation are both set in global space.
Another important parameter is the \a chain \a length,
IKEffector::SetChainLength. A chain length of 1 means a single segment or
"bone" is affected. Arms and legs typically use a value of 2 (because you only
want to solve for the arm and not the entire body). The default value is 0,
which means all nodes right down to the base node are affected.
\code{.cpp}
effector->SetChainLength(2); // Humans have two bones in their arms
effector.chainLength = 2; // AngelScript
effector.chainLength = 2 -- Lua
\endcode
Effectors have a \a weight parameter (use IKEffector::SetWeight) indicating
how much influence they have on the tree to be solved. You can make use of
this to smoothly transition in and out of IK solutions. This is required
when your character begins picking up an object and you want to smoothly
switch from animation to IK.
\code{.cpp}
effector->SetWeight(SomeSplineFunction()); // C++
effector.weight = SomeSplineFunction(); // AngelScript
effector.weight = SomeSplineFunction() -- Lua
\endcode
If you've played around with the weight parameter, you may have noticed that
it causes a \a linear interpolation of the target position. This can look bad
on organic creatures, especially when the solved tree is far apart from the
original tree. You might consider enabling \a nlerp, which causes the weight
to rotate around the next sub-base joint. This feature can be enabled with
IKEffector::SetFeature.
\code{.cpp}
effector->SetFeature(IKEffector::WEIGHT_NLERP, true); // C++
effector.WEIGHT_NLERP = true; // AngelScript
effector.WEIGHT_NLERP = true -- Lua
\endcode
Note that the script bindings work a little differently in this regard. The
features can be enabled/disabled directly on the effector object as attributes
rather than having to call SetFeature. This is for convenience (but may be
changed in the future due to script API inconsistency).
\section iksolvers Solvers
Solvers are responsible for calculating a solution based on the attached
effectors and their settings.
Note that solvers will \a only ever affect nodes that are in their subtree.
This means that if you attach an IKEffector to a node that is a parent of the
IKSolver node, then the solver will ignore this effector.
You can create a solver by attaching an IKSolver component to a node:
\code{.cpp}
IKSolver* solver = modelNode->CreateComponent<IKSolver>(); // C++
IKSolver@ solver = modelNode.CreateComponent("IKSolver"); // AngelScript
local solver = modelNode:CreateComponent("IKSolver") -- Lua
\endcode
The first thing you'll want to do is select the appropriate algorithm. As of
this writing, there are 3 algorithms to choose from, and you should favour
them in the order listed here.
- ONE_BONE: A specialized solver designed to point an object at a target position (such as eyes, heads, etc.)
- TWO_BONE: A specialized solver that calculates a direct solution using trigonometry, specifically for two-bone structures (arms, legs)
- FABRIK: A generic solver capable of solving anything you can throw at it. It uses an iterative algorithm and is thus a bit slower than the two specialized algorithms. Should be used for structures with 3 or more bones, or structures with multiple end effectors.
You can set the algorithm using:
\code{.cpp}
solver->SetAlgorithm(IKSolver::FABRIK); // C++
solver.algorithm = IKAlgorithm::FABRIK; // AngelScript
solver.algorithm = IKSolver.FABRIK -- Lua
\endcode
If you choose an iterative algorithm, then you might also want to tweak the
maximum number of iterations and the tolerance. FABRIK converges very quickly
and works well with 20 or fewer iterations. Sometimes you can even get away
with just 5 iterations. The tolerance specifies the maximum distance an end
effector is allowed to be from its target. Obviously, the smaller you set the
tolerance, the more iterations will be required. A good starting value for the
tolerance is about 1/100th of the length of the chain you're solving (e.g. if
your chain is 2 units long, then set the tolerance to 0.02).
\code{.cpp}
solver->SetMaximumIterations(20); // Good starting value for FABRIK
solver->SetTolerance(0.02); // Good value is 100th of your chain length.
solver.maximumIterations = 20; // AngelScript
solver.tolerance = 0.02;
solver.maximumIterations = 20 -- Lua
solver.tolerance = 0.02
\endcode
Note that these settings do nothing if you have selected a direct solver (such
as TWO_BONE or ONE_BONE).
\section iksolverfeatures Solver Features
There are a number of features that can be enabled/disabled on the solver, all
of which can be toggled by using IKSolver::SetFeature and checked with
IKSolver::GetFeature. You can always look at the documentation of
IKSolver::Feature for a detailed description of each feature.
\subsection iksolverautosolve AUTO_SOLVE
By default, the solver will be in \a auto \a solve mode. This means that it
will automatically perform its calculations for you in response to
E_SCENEDRAWABLEUPDATEFINISHED. All you have to do is create your effectors,
set their target positions, and let the solver handle the rest. You can
override this behaviour by disabling the AUTO_SOLVE feature, in which case you
will have to call IKSolver::Solve manually for it to do anything. This may be
desired if you want to "hook in" right after the animation has updated, but
before inverse kinematics is calculated.
\code{.cpp}
solver->SetFeature(IKSolver::AUTO_SOLVE, false); // C++
solver.AUTO_SOLVE = false; // AngelScript
solver.AUTO_SOLVE = false -- Lua
\endcode
And here's how you manually invoke the solver.
\code{.cpp}
// C++
void MyLogic::Setup()
{
SubscribeToEvent(GetScene(), E_SCENEDRAWABLEUPDATEFINISHED, URHO3D_HANDLER(MyLogic, HandleSceneDrawableUpdateFinished));
}
void MyLogic::HandleSceneDrawableUpdateFinished(StringHash eventType, VariantMap& eventData)
{
GetComponent<IKSolver>()->Solve();
}
// AngelScript
void Setup()
{
SubscribeToEvent("SceneDrawableUpdateFinished", "HandleSceneDrawableUpdateFinished")
}
void HandleSceneDrawableUpdateFinished(StringHash eventType, VariantMap& eventData)
{
GetComponent("IKSolver").Solve()
}
// Lua
function Setup()
SubscribeToEvent("SceneDrawableUpdateFinished", "HandleSceneDrawableUpdateFinished")
end
function HandleSceneDrawableUpdateFinished(eventType, eventData)
GetComponent("IKSolver"):Solve()
end
\endcode
\subsection iksolverjointrotations JOINT_ROTATIONS
\code{.cpp}
solver->SetFeature(IKSolver::JOINT_ROTATIONS, false); // C++
solver.JOINT_ROTATIONS = false; // AngelScript
solver.JOINT_ROTATIONS = false -- Lua
\endcode
This should be enabled if you are using IK on skinned models (or other node
structures that need rotations). If you don't care about node rotations,
you can disable this feature and get a small performance boost.
When disabled, all nodes will simply keep their original orientation in the
world, only their positions will change.
The solver calculates joint rotations after the solution has converged by
comparing the solved tree with the original tree as a way to compute delta
angles. These are then multiplied by the original rotations to obtain the
final joint rotations.
\subsection iksolvertargetrotations TARGET_ROTATIONS
\code{.cpp}
solver->SetFeature(IKSolver::TARGET_ROTATIONS, false); // C++
solver.TARGET_ROTATIONS = false; // AngelScript
solver.TARGET_ROTATIONS = false -- Lua
\endcode
Enabling this will cause the orientation of the effector node
(IKEffector::SetTargetRotation) to be considered during solving. This means
that the effector node will try to match the rotation of the target as best as
possible. If the target is out of reach or just within reach, the chain will
reach out and start to ignore the target rotation in favour of reaching its
target.
Disabling this feature causes IKEffector::SetTargetRotation to have no effect.
\subsection iksolvertrees UPDATE_ORIGINAL_POSE, UPDATE_ACTIVE_POSE, and USE_ORIGINAL_POSE
These options can be quite confusing to understand.
The solver actually stores \a two \a trees, not one. There is an \a active \a
tree, which is kind of like the "workbench". The solver uses the active tree
for its initial condition but also writes the solution back into the active
tree (i.e. the tree is solved in-place, rather than cloning).
Then there is the \a original \a tree, which is set once during creation and
then never changed (at least not by default).
You can control which tree the solver should use for its initial condition. If
you enable USE_ORIGINAL_POSE, then the solver will first copy all
positions/rotations from the original tree into the active tree before
solving. Thus, the solution will tend to "snap back" into its original
configuration if it can.
If you disable USE_ORIGINAL_POSE, then the solver will use the active tree
instead. The active tree will contain whatever pose was solved last. Thus, the
solution will tend to be more "continuous".
Very important: Note that the active tree is NOT updated by Urho3D unless you
enable UPDATE_ACTIVE_POSE (this is enabled by default). If UPDATE_ACTIVE_POSE
is disabled, then any nodes that have moved outside of IKSolver's control will
effectively be \a ignored. Thus, if your model is animated, you very likely
want this enabled.
UPDATE_ORIGINAL_POSE isn't really required, but is here for debugging
purposes. You can update the original pose either by enabling this feature or
by explicitly calling IKSolver::ApplySceneToOriginalPose.
\subsection iksolverconstraints CONSTRAINTS
This feature is not yet implemented and is planned for a future release.
\page UI User interface
Urho3D implements a simple, hierarchical user interface system based on rectangular elements. The elements provided are:
- BorderImage: a texture image with an optional border
- Button: a pushbutton
- CheckBox: a button that can be toggled on/off
- Cursor: a mouse cursor
- DropDownList: shows a vertical list of items (optionally scrollable) as a popup
- LineEdit: a single-line text editor
- ListView: shows a scrollable vertical list of items
- Menu: a button which can show a popup element
- ScrollBar: a slider with back and forward buttons
- ScrollView: a scrollable view of child elements
- Slider: a horizontal or vertical slider bar
- Sprite: a texture image which supports subpixel positioning, scaling and rotating.
- Text: static text that can be multiline
- ToolTip: a popup which automatically displays itself when the cursor hovers on its parent element.
- UIElement: container for other elements, renders nothing by itself
- View3D: a window that renders a 3D viewport
- Window: a movable and resizable window
The root %UI element can be queried from the UI subsystem with the function \ref UI::GetRoot "GetRoot()". It is an empty canvas (UIElement) as large as the application window, into which other elements can be added.
Elements are added into each other similarly as scene nodes, using the \ref UIElement::AddChild "AddChild()" and \ref UIElement::RemoveChild "RemoveChild()" functions. Each %UI element has also a \ref UIElement::GetVars "user variables" VariantMap for storing custom data, and the possibility to add tags for identification: see \ref UIElement::AddTag "AddTag()", \ref UIElement::RemoveTag "RemoveTag()", \ref UIElement::SetTags "SetTags()" and \ref UIElement::GetChildrenWithTag "GetChildrenWithTag()".
To allow the elements to react to mouse input, either a mouse cursor element must be defined using \ref UI::SetCursor "SetCursor()" or the operating system mouse cursor must be set visible from the Input subsystem.
\section UI_Textures UI textures
The BorderImage and elements deriving from it specify a texture and an absolute pixel rect within it to use for rendering; see \ref BorderImage::SetTexture "SetTexture()" and \ref BorderImage::SetImageRect "SetImageRect()". The texture is modulated with the element's color. To allow for more versatile scaling the element can be divided into 9 sub-quads or patches by specifying the width of each of its borders, see \ref BorderImage::SetBorder "SetBorder()". Setting zero borders (the default) causes the element to be drawn as one quad.
The absolute pixel rects interact poorly with the Renderer's texture quality setting, which reduces texture sizes by skipping the topmost mipmaps. Generating mipmaps is also often unnecessary for %UI textures, as they are usually displayed with 1:1 ratio on the screen. Therefore it's a good practice to use the following accompanying settings XML file for %UI textures to disable quality reduction and mipmaps:
\code
<texture>
<mipmap enable="false" />
<quality low="0" />
</texture>
\endcode
\section UI_Definition UI layout and style definition files
User interface elements derive from Serializable, so they can be serialized to/from XML using their attributes. There are two use cases for %UI definition files: either defining just the %UI element style (for example the image rects for a button's each state, or the font to use for a text) and leaving the actual position and dimensions to be filled in later, or fully defining an %UI element layout. The default element style definitions, used for example by the editor and the debug console, are in the file bin/Data/UI/DefaultStyle.xml.
The function \ref UI::LoadLayout "LoadLayout()" in UI will take an XML file and instantiate the elements defined in it. To be valid XML, there should be one root-level %UI element. An optional style XML file can be specified; the idea is to first read the element's style from that file, then fill in the rest from the actual layout XML file. This way the layout file can be relatively simple, as the majority of the data is already defined.
Note that a style can not be easily applied recursively to the loaded elements afterward. Therefore remember to specify the style file already when loading, or alternatively \ref UIElement::SetDefaultStyle "assign a default style file" to the %UI root element, which will then be picked up by all loaded layouts. This works because the %UI subsystem searches for the style file by going up the parental chain, starting from the target parent %UI element. The search stops immediately when a style file is found or when it has reached the root element. Also note that Urho3D does not limit the number of style files being used at the same time in an application. You may have different style files set along the %UI parental hierarchy, if your application needs that.
See the elements' C++ code for all supported attributes, and look at the editor's user interface layouts in the bin/Data/UI directory for examples. You can also use the Editor application to create %UI layouts. The serialization format is similar to scene XML serialization, with three important differences:
1) The element type to instantiate, and the style to use for it can be set separately. For example the following element definition
\code
<element type="Button" style="CloseButton" />
\endcode
tells to instantiate a Button element, and that it should use the style "CloseButton" defined in the style XML file.
2) Internal child elements, for example the scroll bars of a ScrollView, need to be marked as such to avoid instantiating them as duplicates. This is done by adding the attribute internal="true" to the XML element, and is required in both layout and style XML files. Furthermore, the elements must be listed in the order they have been added as children of the parent element (if in doubt, see the element's C++ constructor code. Omitting elements in the middle is OK.) For example:
\code
<element type="ScrollView" />
<element type="ScrollBar" internal="true" />
...customize the horizontal scroll bar attributes here...
</element>
<element type="ScrollBar" internal="true" />
...customize the vertical scroll bar attributes here...
</element>
</element>
\endcode
3) The popup element shown by Menu and DropDownList is not an actual child element. In XML serialization, it is nevertheless stored as a child element, but is marked with the attribute popup="true".
\section UI_Programmatic Defining UI layouts programmatically
Instead of loading a %UI element hierarchy from a definition file, it is just as valid (though cumbersome for larger hierarchies) to create the elements in code, which is demonstrated by several of the Urho3D samples.
In this mode of operation, styles are not automatically applied when an element is added or created to the hierarchy, even if a parent element (or the %UI root) has a default style file assigned. This is because applying a style to an element means just setting a number of attributes, and therefore has the potential to be "destructive", i.e. overwrite already set values. For each created element, you need to manually call either \ref UIElement::SetStyle "SetStyle()" to specify a style name that should be applied, or \ref UIElement::SetStyleAuto "SetStyleAuto()" to use the element's typename as the style name, e.g. the style "Button" for a Button element. In order for this to work, each %UI element should have a style file already effectively assigned, either directly or via its parent element. A common mistake made by users is to construct child elements and attempt to style them inside a parent element's constructor while forgetting to explicitly assign a style file first in the constructor.
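For example, a minimal sketch of creating and styling a button in C++:
\code
UI* ui = GetSubsystem<UI>();
ResourceCache* cache = GetSubsystem<ResourceCache>();
UIElement* root = ui->GetRoot();
root->SetDefaultStyle(cache->GetResource<XMLFile>("UI/DefaultStyle.xml"));
Button* button = root->CreateChild<Button>();
button->SetStyleAuto(); // Applies the style "Button" from the default style file
button->SetSize(100, 30);
button->SetPosition(16, 16);
\endcode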
\section UI_Layouts Child element layouting
By default %UI elements operate in a "free" layout mode, where child elements' positions can be specified relative to any of the parent element corners, but they are not automatically positioned or resized.
To create automatically adjusting child layouts, the layout mode can be switched to either "horizontal" or "vertical". Now the child elements will be positioned left to right or top to bottom, based on the order in which they were added. They will preferably be resized to fit the parent element, taking into account their minimum and maximum sizes; failing that, the parent element will be resized.
Left, top, right & bottom border widths and spacing between elements can also be specified for the layout. A grid layout is not directly supported, but it can be manually created with a horizontal layout inside a vertical layout, or vice versa.
Use the functions \ref UIElement::SetLayout "SetLayout()" or \ref UIElement::SetLayoutMode "SetLayoutMode()" to control the layouting.
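For example, to stack a window's children vertically with 6-pixel spacing and borders (a sketch; 'window' is an existing element):
\code
window->SetLayout(LM_VERTICAL, 6, IntRect(6, 6, 6, 6));
\endcode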
\section UI_Anchoring Child element anchoring
A separate mechanism from layouting that allows automatically adjusting %UI hierarchies is to use anchoring. First enable anchoring in a child element with \ref UIElement::SetEnableAnchor "SetEnableAnchor()", after which the top-left and bottom-right corners in relation to the parent's size (range 0-1) can be set with \ref UIElement::SetMinAnchor "SetMinAnchor()" and \ref UIElement::SetMaxAnchor "SetMaxAnchor()". The corners can further be offset in pixels by calling \ref UIElement::SetMinOffset "SetMinOffset()" and \ref UIElement::SetMaxOffset "SetMaxOffset()". Finally note that instead of just setting horizontal / vertical alignment, the child element's pivot can also be expressed in a 0-1 range relative to its size by calling \ref UIElement::SetPivot "SetPivot()".
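For example, anchoring a child element to stretch over the right half of its parent, with a 4-pixel margin (a sketch; 'child' is an existing element):
\code
child->SetEnableAnchor(true);
child->SetMinAnchor(Vector2(0.5f, 0.0f));
child->SetMaxAnchor(Vector2(1.0f, 1.0f));
child->SetMinOffset(IntVector2(4, 4));
child->SetMaxOffset(IntVector2(-4, -4));
\endcode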
\section UI_Fonts Fonts
Urho3D supports both FreeType (.ttf, .otf) and bitmap fonts (see http://www.angelcode.com/products/bmfont/).
For FreeType fonts, it is possible to adjust the positioning of the font glyphs. See \ref Font::SetAbsoluteGlyphOffset "SetAbsoluteGlyphOffset()" to set a fixed pixel offset for all point sizes, or \ref Font::SetScaledGlyphOffset "SetScaledGlyphOffset()" to set a floating point offset that will be multiplied with the point size before applying. The offset information can be also stored in an accompanying XML file next to the font file, which is formatted in the following way: (it is legal to specify either or both of absolute and scaled offsets, and either or both of X & Y coordinates)
\code
<font>
<absoluteoffset x="xInt" y="yInt" />
<scaledoffset x="xFloat" y="yFloat" />
</font>
\endcode
The \ref UI "UI" class has various global configuration options for font rendering. The default settings are similar to Windows-style rendering, with crisp characters but uneven spacing. For macOS-style rendering, with accurate spacing but slightly blurrier outlines, call \ref UI::SetFontHintLevel "UI::SetFontHintLevel(FONT_HINT_LEVEL_NONE)". Use the Typography sample to explore these options and find the best configuration for your game.
By default, outline fonts will be hinted (aligned to pixel boundaries), using the font's embedded hints if possible. Call \ref UI::SetForceAutoHint "SetForceAutoHint()" to use FreeType's standard auto-hinter rather than each font's embedded hints.
To adjust hinting, call \ref UI::SetFontHintLevel "SetFontHintLevel()". If the level is set to FONT_HINT_LEVEL_LIGHT, fonts will be aligned to the pixel grid vertically, but not horizontally. At FONT_HINT_LEVEL_NONE, hinting is completely disabled.
Hint levels LIGHT and NONE allow for subpixel glyph positioning, which greatly improves spacing, especially at small font sizes. By default, subpixel positioning is only used at sizes up to 12 points; at larger sizes, each glyph is pixel-aligned. Call \ref UI::SetFontSubpixelThreshold "SetFontSubpixelThreshold()" to change this threshold.
When textures aren't pixel-aligned, bilinear texture filtering makes them appear blurry. Therefore, subpixel font textures are oversampled (stretched horizontally) to reduce this blurriness, at the cost of extra memory usage. By default, subpixel fonts use 2x oversampling. Call \ref UI::SetFontOversampling "SetFontOversampling()" to change this.
Subpixel positioning only operates horizontally. %Text is always pixel-aligned vertically.
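For example, configuring these options at startup (a sketch using the functions named above):
\code
UI* ui = GetSubsystem<UI>();
ui->SetFontHintLevel(FONT_HINT_LEVEL_LIGHT); // Vertical hinting only
ui->SetFontSubpixelThreshold(12);            // Subpixel positioning up to 12 points
ui->SetFontOversampling(2);                  // 2x horizontal oversampling
\endcode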
\section UI_Sprites Sprites
Sprites are a special kind of %UI element that allow subpixel (float) positioning and scaling, as well as rotation, while the other elements use integer positioning for pixel-perfect display. Sprites can be used to implement rotating HUD elements such as minimaps or speedometer needles.
Due to the free transformability, sprites can not be reliably queried with \ref UI::GetElementAt "GetElementAt()". Also, only other sprites should be parented to sprites, as the other elements do not support scaling and rotation.
\section UI_Cursor_Shapes Cursor Shapes
Urho3D supports custom Cursor Shapes defined from an \ref Image.
The shape can be an OS default from the \ref CursorShape "CursorShape" enum; these are automatically switched to and from by the \ref UI "UI" subsystem, but can also be switched to manually using \ref Cursor::SetShape "Cursor::SetShape(CursorShape)".
Alternatively, a shape can be defined using a String name to identify it; such a shape can only be switched to and from manually, using \ref Cursor::SetShape "Cursor::SetShape(const String&)".
There are a number of reserved names that are used for the OS defaults:
- Normal
- IBeam
- Cross
- ResizeVertical
- ResizeDiagonalTopRight
- ResizeHorizontal
- ResizeDiagonalTopLeft
- ResizeAll
- AcceptDrop
- RejectDrop
- Busy
- BusyArrow
Cursor Shapes can be defined in a number of different ways:
XML:
\code
<element type="Cursor">
<attribute name="Shapes">
<variant type="VariantVector" >
<variant type="String" value="Normal" />
<variant type="ResourceRef" value="Image;Textures/UI.png" />
<variant type="IntRect" value="0 0 12 24" />
<variant type="IntVector2" value="0 0" />
</variant>
<variant type="VariantVector" >
<variant type="String" value="Custom" />
<variant type="ResourceRef" value="Image;Textures/UI.png" />
<variant type="IntRect" value="12 0 12 36" />
<variant type="IntVector2" value="0 0" />
</variant>
</attribute>
</element>
\endcode
C++:
\code
UI* ui = GetSubsystem<UI>();
ResourceCache* rc = GetSubsystem<ResourceCache>();
Cursor* cursor = new Cursor(context_);
Image* image = rc->GetResource<Image>("Textures/UI.png");
if (image)
{
cursor->DefineShape(CS_NORMAL, image, IntRect(0, 0, 12, 24), IntVector2(0, 0));
cursor->DefineShape("Custom", image, IntRect(12, 0, 12, 36), IntVector2(0, 0));
}
ui->SetCursor(cursor);
\endcode
AngelScript:
\code
Cursor@ cursor = new Cursor();
Image@ image = cache.GetResource("Image", "Textures/UI.png");
if (image !is null)
{
cursor.DefineShape(CS_NORMAL, image, IntRect(0, 0, 12, 24), IntVector2(0, 0));
cursor.DefineShape("Custom", image, IntRect(12, 0, 12, 36), IntVector2(0, 0));
}
ui.SetCursor(cursor);
\endcode
\section UI_Scaling Scaling
By default the %UI is pixel perfect: the root element is sized equal to the application window size.
The pixel scaling can be changed with the functions \ref UI::SetScale "SetScale()", \ref UI::SetWidth "SetWidth()" and \ref UI::SetHeight "SetHeight()".
\page Urho2D Urho2D
In order to make 2D games in Urho3D, the Urho2D sublibrary is provided. Urho2D includes 2D graphics and 2D physics.
A typical 2D game setup would consist of the following:
- Create an orthographic camera
- Create some sprites
- Use physics and constraints to interact with the scene
\section Urho2D_Orthographic Orthographic camera
In order to use Urho2D we need to set the camera to orthographic mode first; this can be done with the following code:
C++:
\code
Node* cameraNode = scene_->CreateChild("Camera"); // Create camera node
Camera* camera = cameraNode->CreateComponent<Camera>(); // Create camera
camera->SetOrthographic(true); // Set camera orthographic
camera->SetOrthoSize((float)graphics->GetHeight() * PIXEL_SIZE); // Set camera ortho size (the value of PIXEL_SIZE is 0.01)
\endcode
AngelScript:
\code
Node@ cameraNode = scene_.CreateChild("Camera"); // Create camera node
Camera@ camera = cameraNode.CreateComponent("Camera"); // Create camera
camera.orthographic = true; // Set camera orthographic
camera.orthoSize = graphics.height * PIXEL_SIZE; // Set camera ortho size (the value of PIXEL_SIZE is 0.01)
\endcode
Lua:
\code
cameraNode = scene_:CreateChild("Camera") -- Create camera node
local camera = cameraNode:CreateComponent("Camera") -- Create camera
camera.orthographic = true -- Set camera orthographic
camera.orthoSize = graphics.height * PIXEL_SIZE -- Set camera ortho size (the value of PIXEL_SIZE is 0.01)
\endcode
To zoom in/out, use \ref Camera::SetZoom "SetZoom()".
\section Urho2D_Sprites Sprites
Urho2D provides a handful of classes for loading/drawing the kind of sprite required by your game. You can choose from animated sprites, 2D particle emitters and static sprites.
\section Urho2D_Animated Animated sprites
Workflow for creating animated sprites in Urho2D relies on Spriter (c). Spriter is a cross-platform tool for creating 2D animations. It comes both as an almost fully featured free version and a more advanced 'pro' version. The free version is available at http://www.brashmonkey.com/spriter.htm. To get started, scml files from the bin/Data/Urho2D folder can be loaded in Spriter. Note that although currently Spriter doesn't support spritesheets/texture atlases, Urho2D does: you just have to use the same name for your scml file and your spritesheet's xml file (see \ref Urho2D_Static_Sprites "Static sprites" below for details on how to generate this file). Example 33_Urho2DSpriterAnimation is a good demonstration of this feature (the scml file and xml spritesheet are both named 'imp' to instruct Urho2D to use the atlas instead of the individual files). You could remove every image file in the 'imp' folder and just keep 'imp_all.png' to test it out. However, keep your individual image files as they are still required if you want to later edit your scml project in Spriter.
A *.scml file is loaded using AnimationSet2D class (Resource) and rendered using AnimatedSprite2D class (Drawable component):
- AnimationSet2D: a Spriter *.scml file including one or more animations.
Each Spriter animation (Animation2D) contained in an AnimationSet2D can be accessed by its index and by its name (using \ref AnimationSet2D::GetAnimation "GetAnimation()").
- AnimatedSprite2D: component used to display a Spriter animation (Animation2D) from an AnimationSet2D. Equivalent to a 3D AnimatedModel. Animation2D animations inside the AnimationSet2D are accessed by their name (%String) using \ref AnimatedSprite2D::SetAnimation "SetAnimation()". Playback animation speed can be controlled using \ref AnimatedSprite2D::SetSpeed "SetSpeed()". Loop mode can be controlled using \ref AnimatedSprite2D::SetLoopMode "SetLoopMode()". You can use the default value set in Spriter (LM_DEFAULT) or make the animation repeat (LM_FORCE_LOOPED) or clamp (LM_FORCE_CLAMPED).
One interesting feature is the ability to flip/mirror animations on both axes, using \ref AnimatedSprite2D::SetFlip "SetFlip()", \ref AnimatedSprite2D::SetFlipX "SetFlipX()" or \ref AnimatedSprite2D::SetFlipY "SetFlipY()". Once flipped, the animation remains in that state until boolean state is restored to false. It is recommended to build your sprites centered in Spriter if you want to easily flip their animations and avoid using offsets for position and collision shapes.
- Animation2D (RefCounted): a Spriter animation from an AnimationSet2D. It allows readonly access to a given scml's animation name (\ref Animation2D::GetName "GetName()"), length (\ref Animation2D::GetLength "GetLength()") and loop state (\ref Animation2D::IsLooped "IsLooped()").
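For example, in C++ (a minimal sketch; 'spriteNode' is an existing node and the asset/animation names are illustrative):
\code
ResourceCache* cache = GetSubsystem<ResourceCache>();
AnimationSet2D* animationSet = cache->GetResource<AnimationSet2D>("Urho2D/imp/imp.scml");
AnimatedSprite2D* animatedSprite = spriteNode->CreateComponent<AnimatedSprite2D>();
animatedSprite->SetAnimationSet(animationSet);
animatedSprite->SetAnimation("idle");
animatedSprite->SetLoopMode(LM_FORCE_LOOPED);
\endcode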
For a demonstration, check examples 33_Urho2DSpriterAnimation and 24_Urho2DSprite.
Tip for naming your files:
- if an xml file has the same name as an image file in the same folder, it is assumed to be a texture parameter file
- if an xml file has the same name as a Spriter scml file in the same folder, it is assumed to be a spritesheet file
So to prevent conflicts, scml and image files shouldn't have the exact same name.
\section Urho2D_Particle Particle emitters
A 2D particle emitter is built from a *.pex file (a format used by many 2D engines).
A *.pex file is loaded using ParticleEffect2D class (Resource) and rendered using ParticleEmitter2D class (Drawable component):
- ParticleEffect2D: a *.pex file defining the behavior and texture of a 2D particle (ParticleEmitter2D). For an example, see bin/Data/Urho2D/greenspiral.pex
- ParticleEmitter2D: used to display a ParticleEffect2D. Equivalent to a 3D ParticleEmitter.
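For example, in C++ (a minimal sketch; 'particleNode' is an existing node):
\code
ResourceCache* cache = GetSubsystem<ResourceCache>();
ParticleEffect2D* effect = cache->GetResource<ParticleEffect2D>("Urho2D/greenspiral.pex");
ParticleEmitter2D* emitter = particleNode->CreateComponent<ParticleEmitter2D>();
emitter->SetEffect(effect);
\endcode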
For a demonstration, check example 25_Urho2DParticle.
The 'ParticleEditor2D' tool (https://github.com/aster2013/ParticleEditor2D) can be used to easily create pex files. To get you started, many elaborate pex samples under friendly licenses are available on the web, mostly on GitHub (check ParticlePanda, Citrus %Engine, %Particle Designer, Flambe, Starling, CBL...)
\section Urho2D_Static_Sprites Static sprites
Static sprites are built from single image files or from spritesheets/texture atlases.
Single image files are loaded using Sprite2D class (Resource) and spritesheets/texture atlases are loaded using SpriteSheet2D class (Resource).
Both are rendered using StaticSprite2D class (Drawable component):
- Sprite2D: an image defined with texture, texture rectangle and hot spot.
- SpriteSheet2D: a texture atlas image (that packs multiple Sprite2D images).
- StaticSprite2D: used to display a Sprite2D. Equivalent to a 3D StaticModel.
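For example, in C++ (a minimal sketch; 'spriteNode' is an existing node):
\code
ResourceCache* cache = GetSubsystem<ResourceCache>();
Sprite2D* sprite = cache->GetResource<Sprite2D>("Urho2D/Box.png");
StaticSprite2D* staticSprite = spriteNode->CreateComponent<StaticSprite2D>();
staticSprite->SetSprite(sprite);
\endcode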
Spritesheets can be created using tools like ShoeBox (http://renderhjs.net/shoebox/), darkFunction Editor (http://darkfunction.com/editor/), SpriteHelper (http://www.gamedevhelper.com/spriteHelper2Info.php), TexturePacker (https://www.codeandweb.com/texturepacker), ...
These tools will generate an image file and an xml file mapping coordinates and size for each individual image. Note that Urho2D uses the same xml file format as the Sparrow/Starling engines.
You can assign a material to an image by creating an xml parameter file with the same name as the image, located in the same folder.
For example, to make the box sprite (bin/Data/Urho2D/Box.png) nearest filtered, create a file Box.xml next to it, with the following content:
\code
<texture>
<filter mode="nearest" />
</texture>
\endcode
The full list of texture parameters is documented \ref Materials "here".
To control sprite opacity, use \ref StaticSprite2D::SetAlpha() "SetAlpha()" (you can also tweak the color alpha using \ref StaticSprite2D::SetColor "SetColor()".)
By default, sprite hotspot is centered, but you can choose another hotspot if need be: use \ref StaticSprite2D::SetUseHotSpot "SetUseHotSpot()" and \ref StaticSprite2D::SetHotSpot "SetHotSpot()".
\section Urho2D_Background_and_Layers Background and layers
To set the background color for the scene, use \ref Renderer::GetDefaultZone "GetDefaultZone()" and \ref Zone::SetFogColor "SetFogColor()".
You can use different layers in order to simulate perspective. In this case you can use \ref Drawable2D::SetLayer "SetLayer()" and \ref Drawable2D::SetOrderInLayer "SetOrderInLayer()" to organise your sprites and arrange their display order.
Finally, note that you can easily mix both 2D and 3D resources. 3D assets' positions need to be slightly offset on the Z axis (z=1 is enough), the camera's position needs to be slightly offset (on the Z axis) beyond the 3D assets' maximum girth, and a light is required.
\section Urho2D_Physics Physics
Urho2D implements rigid body physics simulation using the Box2D library. You can refer to Box2D manual at http://box2d.org/manual.pdf for full reference.
PhysicsWorld2D class implements 2D physics simulation in Urho3D and is mandatory for 2D physics components such as RigidBody2D, CollisionShape2D or Constraint2D.
\section Urho2D_Rigidbodies_Components Rigid bodies components
RigidBody2D is the base class for 2D physics object instance.
Available rigid bodies (BodyType2D) are:
- BT_STATIC: a static body does not move under simulation and behaves as if it has infinite mass. Internally, Box2D stores zero for the mass and the inverse mass. Static bodies can be moved manually by the user. A static body has zero velocity. Static bodies do not collide with other static or kinematic bodies.
- BT_DYNAMIC: a dynamic body is fully simulated. It can be moved manually by the user, but normally it moves according to forces. A dynamic body can collide with all body types. A dynamic body always has finite, non-zero mass. If you try to set the mass of a dynamic body to zero, it will automatically acquire a mass of one kilogram.
- BT_KINEMATIC: a kinematic body moves under simulation according to its velocity. Kinematic bodies do not respond to forces. They can be moved manually by the user, but normally a kinematic body is moved by setting its velocity. A kinematic body behaves as if it has infinite mass, however, Box2D stores zero for the mass and the inverse mass. Kinematic bodies do not collide with other static or kinematic bodies.
You should establish the body type at creation, using \ref RigidBody2D::SetBodyType "SetBodyType()", because changing the body type later is expensive.
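A minimal setup sketch in C++ ('boxNode' is an existing node; the collision shape components are described in the sections below):
\code
scene_->CreateComponent<PhysicsWorld2D>();
RigidBody2D* body = boxNode->CreateComponent<RigidBody2D>();
body->SetBodyType(BT_DYNAMIC);
CollisionBox2D* shape = boxNode->CreateComponent<CollisionBox2D>();
shape->SetSize(Vector2(0.32f, 0.32f)); // Should match the sprite's size
shape->SetDensity(1.0f);
shape->SetFriction(0.5f);
shape->SetRestitution(0.1f);
\endcode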
Rigid bodies can be moved/rotated by applying forces and impulses:
- linear force (progressive/gradual):
- \ref RigidBody2D::ApplyForce "ApplyForce()"
- \ref RigidBody2D::ApplyForceToCenter "ApplyForceToCenter()" (same as ApplyForce, but with the application point set to the center of mass, which prevents the body from rotating/spinning)
- linear or angular impulse (brutal/immediate):
- \ref RigidBody2D::ApplyLinearImpulse "ApplyLinearImpulse()"
- \ref RigidBody2D::ApplyAngularImpulse "ApplyAngularImpulse()"
- torque (angular force):
- \ref RigidBody2D::ApplyTorque "ApplyTorque()"
ApplyForce() and ApplyLinearImpulse() take two parameters: the force vector to apply and the world point where to apply it. Note that in order to improve performance, you can avoid waking a sleeping body by setting the 'wake' parameter to false.
You can also directly set the linear or angular velocity of the body using \ref RigidBody2D::SetLinearVelocity "SetLinearVelocity()" or \ref RigidBody2D::SetAngularVelocity "SetAngularVelocity()". And you can get current velocity using \ref RigidBody2D::GetLinearVelocity "GetLinearVelocity()" or \ref RigidBody2D::GetAngularVelocity "GetAngularVelocity()".
To 'manually' move or rotate a body, simply translate or rotate the node to which it belongs.
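As a minimal C++ sketch (assuming the scene already contains a PhysicsWorld2D component), a dynamic body could be created and moved like this:
\code
Node* ballNode = scene_->CreateChild("Ball");
RigidBody2D* body = ballNode->CreateComponent<RigidBody2D>();
body->SetBodyType(BT_DYNAMIC); // Establish the body type at creation
CollisionCircle2D* circle = ballNode->CreateComponent<CollisionCircle2D>();
circle->SetRadius(0.16f);
// Gradual push to the right, applied at the center of mass so the body will not spin
body->ApplyForceToCenter(Vector2(10.0f, 0.0f), true);
// Immediate upward kick applied at the body's current world position
body->ApplyLinearImpulse(Vector2(0.0f, 0.5f), ballNode->GetWorldPosition2D(), true);
// Or simply set the velocity directly
body->SetLinearVelocity(Vector2(2.0f, 0.0f));
\endcode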
\section Urho2D_Collision_Shapes_Components Collision shapes components
Check Box2D manual - Chapter 4 Collision Module and Chapter 7 Fixtures for full reference.
\subsection Urho2D_Collision_Shapes_Shapes Shapes
- Box shapes <=> CollisionBox2D: defines 2D physics collision box. Box shapes have an optional position offset (\ref CollisionBox2D::SetCenter "SetCenter()"), width and height size (\ref CollisionBox2D::SetSize "SetSize()") and a rotation angle expressed in degrees (\ref CollisionBox2D::SetAngle "SetAngle()"). Boxes are solid, so if you need a hollow box shape then create one from a CollisionChain2D shape.
- Circle shapes <=> CollisionCircle2D: defines 2D physics collision circle. Circle shapes have an optional position offset (\ref CollisionCircle2D::SetCenter "SetCenter()") and a radius (\ref CollisionCircle2D::SetRadius "SetRadius()"). Circles are solid, you cannot make a hollow circle using the circle shape.
- Polygon shapes <=> CollisionPolygon2D: defines 2D physics collision polygon. Polygon shapes are solid convex polygons. A polygon is convex when all line segments connecting two points in the interior do not cross any edge of the polygon. A polygon must have 3 or more vertices (\ref CollisionPolygon2D::SetVertices "SetVertices()"). Polygon vertex winding order doesn't matter.
- Edge shapes <=> CollisionEdge2D: defines 2D physics collision edge. Edge shapes are line segments defined by 2 vertices (\ref CollisionEdge2D::SetVertex1 "SetVertex1()" and \ref CollisionEdge2D::SetVertex2 "SetVertex2()", or globally \ref CollisionEdge2D::SetVertices "SetVertices()"). They are provided to assist in making a free-form static environment for your game. A major limitation of edge shapes is that they can collide with circles and polygons but not with themselves. The collision algorithms used by Box2D require that at least one of the two colliding shapes has volume. Edge shapes have no volume, so edge-edge collision is not possible.
- Chain shapes <=> CollisionChain2D: defines 2D physics collision chain. The chain shape provides an efficient way to connect many edges together (\ref CollisionChain2D::SetVertices "SetVertices()") to construct your static game worlds. You can connect chains together using ghost vertices. Self-intersection of chain shapes is not supported.
Several collision shapes may exist in the same node to create compound shapes. This can be handy to approximate complex or concave shapes.
Important: collision shapes must match your textures in order to be accurate. You can use Tiled's objects to create your shapes (see \ref Urho2D_TMX_Objects "Tile map objects"), or you can use tools like Physics Body Editor (https://code.google.com/p/box2d-editor/), RUBE (https://www.iforce2d.net/rube/), LevelHelper (http://www.gamedevhelper.com/levelhelper/) or PhysicsEditor (https://www.codeandweb.com/physicseditor) to help you. Another interesting tool is BisonKick (https://bisonkick.com/app/518195d06927101d38a83b66/).
Use \ref PhysicsWorld2D::SetDrawShape "SetDrawShape()" in combination with \ref PhysicsWorld2D::DrawDebugGeometry "DrawDebugGeometry()" to toggle shape visibility.
\subsection Urho2D_Collision_Shapes_Fixtures Fixtures
Box2D fixtures are implemented through the CollisionShape2D base class for 2D physics collision shapes. Common parameters shared by every collision shape include:
- Density: \ref CollisionShape2D::SetDensity "SetDensity()" and \ref CollisionShape2D::GetDensity "GetDensity()"
- Friction: \ref CollisionShape2D::SetFriction "SetFriction()" and \ref CollisionShape2D::GetFriction "GetFriction()"
- Restitution (bounciness): \ref CollisionShape2D::SetRestitution "SetRestitution()" and \ref CollisionShape2D::GetRestitution "GetRestitution()"
CollisionShape2D class also provides read-only access to these properties:
- Mass: \ref CollisionShape2D::GetMass "GetMass()"
- Inertia: \ref CollisionShape2D::GetInertia "GetInertia()"
- Center of mass: \ref CollisionShape2D::GetMassCenter "GetMassCenter()"
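For example, fixture parameters could be set on a shape like this (a sketch, assuming the node already carries a RigidBody2D):
\code
CollisionBox2D* shape = node->CreateComponent<CollisionBox2D>();
shape->SetSize(Vector2(0.32f, 0.32f)); // Width and height in Urho2D units
shape->SetDensity(1.0f);               // Used to compute the body's mass
shape->SetFriction(0.5f);
shape->SetRestitution(0.1f);           // Slight bounciness
\endcode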
\subsection Urho2D_Collision_Shapes_Filtering Collision filtering
Box2D supports collision filtering (restricting which other objects to collide with) using categories and groups:
- Collision categories:
- First assign the collision shape to a category, using \ref CollisionShape2D::SetCategoryBits "SetCategoryBits()". Sixteen categories are available.
- Then you can specify what other categories the given collision shape can collide with, using \ref CollisionShape2D::SetMaskBits "SetMaskBits()".
- Collision groups: positive and negative indices assigned using \ref CollisionShape2D::SetGroupIndex "SetGroupIndex()". All collision shapes within the same group index either always collide (positive index) or never collide (negative index).
Note that:
- collision group has higher precedence than collision category
- a collision shape on a static body can only collide with a dynamic body
- a collision shape on a kinematic body can only collide with a dynamic body
- collision shapes on the same body never collide with each other
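For illustration, a minimal sketch of category- and group-based filtering (playerShape and enemyShape are hypothetical CollisionShape2D pointers, and the category meanings are application-defined):
\code
// Put the player shape in category 1 and let it collide only with category 2
playerShape->SetCategoryBits(0x0001);
playerShape->SetMaskBits(0x0002);
// Put enemy shapes in category 2 and let them collide with everything
enemyShape->SetCategoryBits(0x0002);
enemyShape->SetMaskBits(0xffff);
// Alternatively, shapes sharing the same negative group index never collide
playerShape->SetGroupIndex(-1);
enemyShape->SetGroupIndex(-1);
\endcode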
\subsection Urho2D_Collision_Shapes_Sensors Sensors
A collision shape can be set to \ref CollisionShape2D::SetTrigger "trigger mode" to only report collisions without actually applying collision forces. This can be used to implement trigger areas. Note that:
- a sensor can be triggered only by dynamic bodies (BT_DYNAMIC)
- \ref Urho2D_Physics_Queries "physics queries" don't report triggers. To get notified when a sensor is triggered or ceases to be triggered, subscribe to the E_PHYSICSBEGINCONTACT2D and E_PHYSICSENDCONTACT2D \ref Urho2D_Physics_Events "physics events".
\section Urho2D_Constraints_Components Constraints components
Constraints ('joints' in Box2D terminology) are used to constrain bodies to an anchor point or to each other. Apply a constraint to a node (its body is called the 'ownerBody') and use \ref Constraint2D::SetOtherBody "SetOtherBody()" to set the other node's body to be constrained to the ownerBody.
See the 32_Urho2DConstraints sample for detailed examples and help selecting the appropriate constraint. Following are the available constraint classes, with an indication of the corresponding 'joint' in the Box2D manual (see Chapter 8 Joints):
- Constraint2D: base class for 2D physics constraints.
- Distance joint <=> ConstraintDistance2D: defines 2D physics distance constraint. The distance between two anchor points (\ref ConstraintDistance2D::SetOwnerBodyAnchor "SetOwnerBodyAnchor()" and \ref ConstraintDistance2D::SetOtherBodyAnchor "SetOtherBodyAnchor()") on two bodies is kept constant. The constraint can also be made soft, like a spring-damper connection, by tuning the frequency (\ref ConstraintDistance2D::SetFrequencyHz "SetFrequencyHz()", which should be below half of the timestep frequency) and the damping ratio (\ref ConstraintDistance2D::SetDampingRatio "SetDampingRatio()").
- Revolute joint <=> ConstraintRevolute2D: defines 2D physics revolute constraint. This constraint forces two bodies to share a common hinge anchor point (\ref ConstraintRevolute2D::SetAnchor "SetAnchor()"). You can control the relative rotation of the two bodies (the constraint angle) using a limit and/or a motor. A limit (\ref ConstraintRevolute2D::SetEnableLimit "SetEnableLimit()") forces the joint angle to remain between a lower (\ref ConstraintRevolute2D::SetLowerAngle "SetLowerAngle()") and upper (\ref ConstraintRevolute2D::SetUpperAngle "SetUpperAngle()") bound. The limit will apply as much torque as needed to make this happen. The limit range should include zero, otherwise the constraint will lurch when the simulation begins. A motor (\ref ConstraintRevolute2D::SetEnableMotor "SetEnableMotor()") allows you to specify the constraint speed (the time derivative of the angle). The speed (\ref ConstraintRevolute2D::SetMotorSpeed "SetMotorSpeed()") can be negative or positive. When the maximum torque (\ref ConstraintRevolute2D::SetMaxMotorTorque "SetMaxMotorTorque()") is exceeded, the joint will slow down and can even reverse. You can use a motor to simulate friction. Just set the joint speed to zero, and set the maximum torque to some small, but significant value. The motor will try to prevent the constraint from rotating, but will yield to a significant load.
- Prismatic joint <=> ConstraintPrismatic2D: defines 2D physics prismatic constraint. This constraint allows for relative translation of two bodies along a specified axis (\ref ConstraintPrismatic2D::SetAxis "SetAxis()"). There's no rotation applied. This constraint definition is similar to ConstraintRevolute2D description; just substitute translation for angle and force for torque.
- Pulley joint <=> ConstraintPulley2D: defines 2D physics pulley constraint. The pulley connects two bodies to ground (\ref ConstraintPulley2D::SetOwnerBodyGroundAnchor "SetOwnerBodyGroundAnchor()" and \ref ConstraintPulley2D::SetOtherBodyGroundAnchor "SetOtherBodyGroundAnchor()") and to each other (\ref ConstraintPulley2D::SetOwnerBodyAnchor "SetOwnerBodyAnchor()" and \ref ConstraintPulley2D::SetOtherBodyAnchor "SetOtherBodyAnchor()"). As one body goes up, the other goes down. You can supply a ratio (\ref ConstraintPulley2D::SetRatio "SetRatio()") that simulates a block and tackle. This causes one side of the pulley to extend faster than the other. At the same time the constraint force is smaller on one side than the other. You can use this to create mechanical leverage.
- Gear joint <=> ConstraintGear2D: defines 2D physics gear constraint. Used to create sophisticated mechanisms and saves from using compound shapes. This constraint can only connect ConstraintRevolute2Ds and/or ConstraintPrismatic2Ds (\ref ConstraintGear2D::SetOwnerConstraint "SetOwnerConstraint()" and \ref ConstraintGear2D::SetOtherConstraint "SetOtherConstraint()"). Like the pulley ratio, you can specify a gear ratio (\ref ConstraintGear2D::SetRatio "SetRatio()"). However, in this case the gear ratio can be negative.
- Mouse joint <=> ConstraintMouse2D: defines 2D physics mouse constraint. Used to manipulate bodies with the mouse, this constraint is used in almost every Box2D tutorial available on the net to allow interacting with the 2D scene. It attempts to drive a point on a body towards the current position of the cursor. There is no restriction on rotation. This constraint has a target point, maximum force, frequency, and damping ratio. The target point (\ref ConstraintMouse2D::SetTarget "SetTarget()") initially coincides with the body's anchor point. The maximum force (\ref ConstraintMouse2D::SetMaxForce "SetMaxForce()") is used to prevent violent reactions when multiple dynamic bodies interact. You can make this as large as you like. The frequency (\ref ConstraintMouse2D::SetFrequencyHz "SetFrequencyHz()") and damping ratio (\ref ConstraintMouse2D::SetDampingRatio "SetDampingRatio()") are used to create a spring/damper effect similar to the ConstraintDistance2D. Many users have tried to adapt the ConstraintMouse2D for game play, wanting precise positioning and instantaneous response; the ConstraintMouse2D doesn't work very well in that context. You may wish to consider using kinematic bodies instead.
- Wheel joint <=> ConstraintWheel2D: defines 2D physics wheel constraint. This constraint restricts a point on bodyB (\ref ConstraintWheel2D::SetAnchor "SetAnchor()") to a line on bodyA (\ref ConstraintWheel2D::SetAxis "SetAxis()"). It also provides a suspension spring.
- Weld joint <=> ConstraintWeld2D: defines 2D physics weld constraint. This constraint attempts to constrain all relative motion between two bodies.
- Rope joint <=> ConstraintRope2D: defines 2D physics rope constraint. This constraint restricts the maximum distance (\ref ConstraintRope2D::SetMaxLength "SetMaxLength()") between two points (\ref ConstraintRope2D::SetOwnerBodyAnchor "SetOwnerBodyAnchor()" and \ref ConstraintRope2D::SetOtherBodyAnchor "SetOtherBodyAnchor()"). This can be useful to prevent chains of bodies from stretching, even under high load.
- Friction joint <=> ConstraintFriction2D: defines 2D physics friction constraint. This constraint is used for top-down friction. It provides 2D translational friction (\ref ConstraintFriction2D::SetMaxForce "SetMaxForce()") and angular friction (\ref ConstraintFriction2D::SetMaxTorque "SetMaxTorque()").
- Motor joint <=> ConstraintMotor2D: defines 2D physics motor constraint. This constraint lets you control the motion of a body by specifying target position (\ref ConstraintMotor2D::SetLinearOffset "SetLinearOffset()") and rotation offsets (\ref ConstraintMotor2D::SetAngularOffset "SetAngularOffset()"). You can set the maximum motor force (\ref ConstraintMotor2D::SetMaxForce "SetMaxForce()") and torque (\ref ConstraintMotor2D::SetMaxTorque "SetMaxTorque()") that will be applied to reach the target position and rotation. If the body is blocked, it will stop and the contact forces will be proportional to the maximum motor force and torque.
Collision between bodies connected by a constraint can be enabled/disabled using \ref Constraint2D::SetCollideConnected "SetCollideConnected()".
Use \ref PhysicsWorld2D::SetDrawJoint "SetDrawJoint()" in combination with \ref PhysicsWorld2D::DrawDebugGeometry "DrawDebugGeometry()" to toggle joint visibility.
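As an example, a soft distance constraint between two nodes could be set up like this (a sketch; ballNode and platformNode are assumed to exist and to carry RigidBody2D components):
\code
ConstraintDistance2D* constraint = ballNode->CreateComponent<ConstraintDistance2D>();
constraint->SetOtherBody(platformNode->GetComponent<RigidBody2D>()); // The ownerBody is ballNode's body
constraint->SetOwnerBodyAnchor(ballNode->GetPosition2D());
constraint->SetOtherBodyAnchor(platformNode->GetPosition2D());
// Make the constraint soft, like a spring-damper
constraint->SetFrequencyHz(4.0f);
constraint->SetDampingRatio(0.5f);
\endcode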
\section Urho2D_Physics_Queries Physics queries
The following queries into the physics world are provided:
\subsection Urho2D_Physics_Queries_Unary Unary geometric queries (queries on a single shape)
- Shape point test: tests if a point is inside a given plain shape and returns the body if true. Use \ref PhysicsWorld2D::GetRigidBody() "GetRigidBody()". The point can be a Vector2 world position, or more conveniently you can pass screen coordinates when performing the test from an input (mouse, joystick, touch). Note that only plain shapes are supported; this test is not applicable to \ref CollisionChain2D and \ref CollisionEdge2D shapes.
- Shape ray cast: returns the body, distance, point of intersection (position) and normal vector for the first shape hit by the ray. Use \ref PhysicsWorld2D::RaycastSingle "RaycastSingle()".
\subsection Urho2D_Physics_Queries_Binary Binary functions
- Overlap between 2 shapes: not implemented
- Contact manifolds (contact points for overlapping shapes): not implemented
- Distance between 2 shapes: not implemented
- %Time of impact (time when 2 moving shapes collide): not implemented
\subsection Urho2D_Physics_Queries_World World queries (see Box2D manual - Chapter 10 World Class)
- AABB queries: return the bodies overlapping with the given rectangle. See \ref PhysicsWorld2D::GetRigidBodies "GetRigidBodies()".
- %Ray casts: return the body, distance, point of intersection (position) and normal vector for every shape hit by the ray. See \ref PhysicsWorld2D::Raycast "Raycast()".
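A sketch of both query types in C++:
\code
PhysicsWorld2D* physicsWorld = scene_->GetComponent<PhysicsWorld2D>();
// Shape point test using the current mouse screen coordinates
IntVector2 mousePos = GetSubsystem<Input>()->GetMousePosition();
RigidBody2D* pickedBody = physicsWorld->GetRigidBody(mousePos.x_, mousePos.y_);
// Single ray cast between two world points
PhysicsRaycastResult2D result;
physicsWorld->RaycastSingle(result, Vector2::ZERO, Vector2(10.0f, 0.0f));
if (result.body_)
{
    // result.position_, result.normal_ and result.distance_ describe the hit
}
\endcode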
\section Urho2D_Physics_Events Physics events
Contact listener (see Box2D manual, Chapter 9 Contacts) enables a given node to report contacts through events. Available events are:
- E_PHYSICSBEGINCONTACT2D ("PhysicsBeginContact2D" in script): called when 2 collision shapes begin to overlap
- E_PHYSICSENDCONTACT2D ("PhysicsEndContact2D" in script): called when 2 collision shapes cease to overlap
- E_PHYSICSPRESTEP2D ("PhysicsPreStep2D" in script): called after collision detection, but before collision resolution. This allows disabling the contact if need be (for example on a one-sided platform). Currently ineffective (only reports PhysicsWorld2D and time step)
- E_PHYSICSPOSTSTEP2D ("PhysicsPostStep2D" in script): used to gather collision impulse results. Currently ineffective (only reports PhysicsWorld2D and time step)
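For example, contacts could be handled like this (a sketch; MyApp and its handler are hypothetical):
\code
SubscribeToEvent(E_PHYSICSBEGINCONTACT2D, URHO3D_HANDLER(MyApp, HandleContactBegin));

void MyApp::HandleContactBegin(StringHash eventType, VariantMap& eventData)
{
    using namespace PhysicsBeginContact2D;
    Node* nodeA = static_cast<Node*>(eventData[P_NODEA].GetPtr());
    Node* nodeB = static_cast<Node*>(eventData[P_NODEB].GetPtr());
    // React to the two nodes beginning to touch
}
\endcode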
\section Urho2D_TileMap Tile maps
The tile map workflow relies on the TMX file format, which is the native format of Tiled, a free app available at http://www.mapeditor.org/. It is strongly recommended to use stable release 0.9.1. Do not use daily builds or other newer/older stable revisions, otherwise results may be unpredictable.
Check example 36_Urho2DTileMap for a basic demonstration.
You can use tile maps for the design of the whole scene/level, or in conjunction with other 2D resources.
\subsection Urho2D_LoadingTMX "Loading" a TMX tile map file
A tmx file is loaded using \ref TmxFile2D "TmxFile2D resource class" and rendered using \ref TileMap2D "TileMap2D component class".
You just have to create a \ref TileMap2D "TileMap2D" component inside a node and then assign the tmx resource file to it.
C++:
\code
SharedPtr<Node> tileMapNode(scene_->CreateChild("TileMap")); // Create a standard Urho3D node
TileMap2D* tileMap = tileMapNode->CreateComponent<TileMap2D>(); // Create the TileMap2D component
tileMap->SetTmxFile(cache->GetResource<TmxFile2D>("Urho2D/isometric_grass_and_water.tmx")); // Assign tmx resource file to component
\endcode
AngelScript:
\code
Node@ tileMapNode = scene_.CreateChild("TileMap"); // Create a standard Urho3D node
TileMap2D@ tileMap = tileMapNode.CreateComponent("TileMap2D"); // Create the TileMap2D component
tileMap.tmxFile = cache.GetResource("TmxFile2D", "Urho2D/isometric_grass_and_water.tmx"); // Assign the tmx resource file to component
\endcode
Lua:
\code
local tileMapNode = scene_:CreateChild("TileMap") -- Create a standard Urho3D node
local tileMap = tileMapNode:CreateComponent("TileMap2D") -- Create the TileMap2D component
tileMap.tmxFile = cache:GetResource("TmxFile2D", "Urho2D/isometric_grass_and_water.tmx") -- Assign tmx resource file to component
\endcode
Note that:
- currently only XML Layer Format is supported (Base64 and CSV are not). In Tiled, go to Maps > Properties to set 'Layer Format' to 'XML'.
- if 'seams' between tiles are obvious then you should make your tileset images nearest filtered (see the \ref Urho2D_Static_Sprites "Static sprites" section above.)
\subsection Urho2D_TMX_maps TMX tile maps
Once a tmx file is loaded in Urho, use \ref TileMap2D::GetInfo "GetInfo()" to access the map properties through the \ref TileMapInfo2D class.
A map is defined by its:
- orientation: Urho2D supports both orthogonal (flat) and isometric (strict iso 2.5D and staggered iso) tile maps. Orientation can be retrieved with \ref TileMapInfo2D::orientation_ "orientation_" attribute (O_ORTHOGONAL for ortho, O_ISOMETRIC for iso and O_STAGGERED for staggered)
- width and height expressed as a number of tiles in the map: use \ref TileMapInfo2D::width_ "width_" and \ref TileMapInfo2D::height_ "height_" attributes to access these values
- width and height expressed in Urho2D space: use \ref TileMapInfo2D::GetMapWidth "GetMapWidth()" and \ref TileMapInfo2D::GetMapHeight "GetMapHeight()" to access these values which are useful to set the camera's position for example
- tile width and tile height as the size in pixels of the tiles in the map (equates to Tiled width/height * PIXEL_SIZE): use \ref TileMapInfo2D::tileWidth_ "tileWidth_" and \ref TileMapInfo2D::tileHeight_ "tileHeight_" attributes to access these values
Two convenient functions are provided to convert Tiled index to/from Urho2D space:
- \ref TileMap2D::TileIndexToPosition "TileIndexToPosition()" to convert tile index to Urho position
- \ref TileMap2D::PositionToTileIndex "PositionToTileIndex()" to convert Urho position to tile index (returns false if position is outside of the map)
You can display debug geometry for the whole tile map using \ref TileMap2D::DrawDebugGeometry "DrawDebugGeometry()".
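For example (a sketch; worldPosition is an assumed Vector2):
\code
int x, y;
if (tileMap->PositionToTileIndex(x, y, worldPosition))
{
    // Convert back to the tile's position in Urho2D space
    Vector2 tilePosition = tileMap->TileIndexToPosition(x, y);
}
\endcode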
\subsection Urho2D_TMX_Tileset TMX tile map tilesets and tiles
A tile map is built from fixed-size sprites ('tiles', accessible from the Tile2D class) belonging to one or more 'tilesets' (=spritesheets). Each tile is characterized by its:
- grid ID (ID in the tileset, from top-left to bottom-right): use \ref Tile2D::GetGid "GetGid()"
- sprite/image (Sprite2D): use \ref Tile2D::GetSprite "GetSprite()"
- property: use \ref Tile2D::HasProperty "HasProperty()" and \ref Tile2D::GetProperty "GetProperty()"
Tiles from a tileset can only be accessed from one of the layers they are 'stamped' onto, using \ref TileMapLayer2D::GetTile "GetTile()" (see next section).
\subsection Urho2D_TMX_layers TMX tile map layers
A tile map is composed of a mix of ordered \ref TileMapLayer2D "layers". The number of layers contained in the tmx file is retrieved using \ref TileMap2D::GetNumLayers "GetNumLayers()".
Accessing layers: from a \ref TileMap2D "TileMap2D component", layers are accessed by their index from bottom (0) to top using the \ref TileMap2D::GetLayer "GetLayer()" function.
A layer is characterized by its:
- name: currently not accessible
- width and height expressed as a number of tiles: use \ref TileMapLayer2D::GetWidth "GetWidth()" and \ref TileMapLayer2D::GetHeight "GetHeight()" to access these values
- type: retrieved using \ref TileMapLayer2D::GetLayerType "GetLayerType()" (returns the type of layer, a TileMapLayerType2D: Tile=LT_TILE_LAYER, %Object=LT_OBJECT_GROUP, %Image=LT_IMAGE_LAYER and Invalid=LT_INVALID)
- custom properties: use \ref TileMapLayer2D::HasProperty "HasProperty()" and \ref TileMapLayer2D::GetProperty "GetProperty()" to check/access these values
Layer visibility can be toggled using \ref TileMapLayer2D::SetVisible "SetVisible()" (and visibility state can be accessed with \ref TileMapLayer2D::IsVisible "IsVisible()"). Currently layer opacity is not implemented. Use \ref TileMapLayer2D::DrawDebugGeometry "DrawDebugGeometry()" to display debug geometry for a given layer.
By default, the first tile map layer is drawn on scene layer 0 and each subsequent layer is drawn 10 scene layers higher. For example, if your tile map has 3 layers:
- bottom layer is drawn on layer 0
- middle layer is on layer 10
- top layer is on layer 20
You can override this default layering order by using \ref TileMapLayer2D::SetDrawOrder "SetDrawOrder()", and you can retrieve the order using \ref TileMapLayer2D::GetDrawOrder "GetDrawOrder()".
You can access a given tile node or tileset's tile (Tile2D) by its index (tile index is displayed at the bottom-left in Tiled and can be retrieved from position using \ref TileMap2D::PositionToTileIndex "PositionToTileIndex()"):
- to access a tile node, which enables access to the StaticSprite2D component, for example to remove it or replace it, use \ref TileMapLayer2D::GetTileNode "GetTileNode()"
- to access a tileset's Tile2D tile, which enables access to the Sprite2D resource, gid and custom properties (as mentioned \ref Urho2D_TMX_Tileset "above"), use \ref TileMapLayer2D::GetTile "GetTile()"
%Image layer and %Object layer nodes are accessible using \ref TileMapLayer2D::GetImageNode "GetImageNode()" and \ref TileMapLayer2D::GetObjectNode "GetObjectNode()" respectively.
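A sketch of accessing a layer and one of its tiles by index (worldPosition is an assumed Vector2):
\code
TileMapLayer2D* layer = tileMap->GetLayer(0); // Bottom layer
int x, y;
if (tileMap->PositionToTileIndex(x, y, worldPosition))
{
    Tile2D* tile = layer->GetTile(x, y);       // Tileset tile: sprite, gid, custom properties
    Node* tileNode = layer->GetTileNode(x, y); // Scene node carrying the StaticSprite2D component
}
\endcode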
\subsection Urho2D_TMX_Objects TMX tile map objects
Tiled \ref TileMapObject2D "objects" are wire shapes (Rectangle, Ellipse, Polygon, Polyline) and sprites (Tile) that are freely positionable in the tile map.
Accessing Tiled objects: from a \ref TileMapLayer2D "TileMapLayer2D layer", objects are accessed by their index using \ref TileMapLayer2D::GetObject "GetObject()". \ref TileMapLayer2D::GetNumObjects "GetNumObjects()" returns the number of objects contained in the object layer (tile and image layers will return 0 as they don't hold objects).
Use \ref TileMapObject2D::GetObjectType "GetObjectType()" to get the nature of the selected object (TileMapObjectType2D: OT_RECTANGLE for Rectangle, OT_ELLIPSE for Ellipse, OT_POLYGON for Polygon, OT_POLYLINE for PolyLine, OT_TILE for Tile and OT_INVALID if not a valid object).
Objects' properties (Name and Type) can be accessed using respectively \ref TileMapObject2D::GetName "GetName()" and \ref TileMapObject2D::GetType "GetType()". Type can be useful to flag categories of objects in Tiled.
Except for Tile objects, objects are not visible (although you can display them for debugging purposes using DrawDebugGeometry() at the level of the tile map or a given layer, as mentioned previously). They can be used:
- to easily design polygon sprites and Box2D shapes using the object's vertices: use \ref TileMapObject2D::GetNumPoints "GetNumPoints()" to get the number of vertices and \ref TileMapObject2D::GetPoint "GetPoint()" to iterate through the vertices
- as placeholders to easily set the position and size of entities in the world, using \ref TileMapObject2D::GetPosition "GetPosition()" and \ref TileMapObject2D::GetSize "GetSize()"
- to display Tile objects as sprites
- to create a background from Tile sprites
- etc.
Additionally, the Sprite2D resource of a Tile object is retrieved using \ref TileMapObject2D::GetTileSprite "GetTileSprite()".
If need be you can access the grid id (relative to the tilesets used) of a Tile object using \ref TileMapObject2D::GetTileGid "GetTileGid()".
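For example, the vertices of polygon objects in an object layer could be iterated like this (a sketch; the layer index is an assumption):
\code
TileMapLayer2D* objectLayer = tileMap->GetLayer(1); // Assumed to be an object layer
for (unsigned i = 0; i < objectLayer->GetNumObjects(); ++i)
{
    TileMapObject2D* object = objectLayer->GetObject(i);
    if (object->GetObjectType() == OT_POLYGON)
    {
        for (unsigned j = 0; j < object->GetNumPoints(); ++j)
        {
            const Vector2& point = object->GetPoint(j);
            // Use the vertex, for example to build a CollisionPolygon2D
        }
    }
}
\endcode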
\page Serialization Serialization
Classes that derive from Serializable can perform automatic serialization to binary or XML format by defining \ref AttributeInfo "attributes". Attributes are stored to the Context per class. %Scene load/save and network replication are both implemented by having the Node and Component classes derive from Serializable.
The supported attribute types are all those supported by Variant, excluding pointers and custom values.
Attributes can either refer to a member of the object or define setter & getter functions. Member attributes can also have a post-set action: a member function without arguments that is called every time a value is assigned to the attribute.
Zero-based enumerations are also supported, so that the enum values can be stored as text into XML files instead of just numbers.
The following macros can be used to define an attribute:
- `URHO3D_ATTRIBUTE`: Member of the object. Must be convertible from and to the specified type.
- `URHO3D_ATTRIBUTE_EX`: Member of the object. A post-set member function callback is called when the attribute is set.
- `URHO3D_ACCESSOR_ATTRIBUTE`: Getter and setter member functions. The attribute value of the specified type is passed into the setter and expected to be returned from the getter.
- `URHO3D_CUSTOM_ATTRIBUTE`: Getter and setter functional objects that work directly with the Variant value. See \ref MakeVariantAttributeAccessor "MakeVariantAttributeAccessor()".
- `URHO3D_ENUM_ATTRIBUTE`: 32-bit integer zero-based enumeration with human-readable names.
- `URHO3D_ENUM_ATTRIBUTE_EX`: The same as `URHO3D_ATTRIBUTE_EX`, used for enumerations.
- `URHO3D_ENUM_ACCESSOR_ATTRIBUTE`: The same as `URHO3D_ACCESSOR_ATTRIBUTE`, used for enumerations.
- `URHO3D_CUSTOM_ENUM_ATTRIBUTE`: The same as `URHO3D_CUSTOM_ATTRIBUTE`, used for enumerations.
To implement side effects to attributes, the default attribute access functions in Serializable can be overridden. See \ref Serializable::OnSetAttribute "OnSetAttribute()" and \ref Serializable::OnGetAttribute "OnGetAttribute()".
Each attribute can have a combination of the following flags:
- `AM_FILE`: Is used for file serialization (load/save.)
- `AM_NET`: Is used for network replication.
- `AM_LATESTDATA`: Frequently changing data for network replication, where only the latest values matter. Used for motion and animation.
- `AM_NOEDIT`: Is an internal attribute and is not to be shown for editing.
- `AM_NODEID`: Is a node ID and may need rewriting when instantiating scene content.
- `AM_COMPONENTID`: Is a component ID and may need rewriting when instantiating scene content.
The default flags are `AM_FILE` and `AM_NET`. Note that it is legal to define neither `AM_FILE` nor `AM_NET`, meaning the attribute has only run-time significance (perhaps for editing.)
See the existing engine classes e.g. in the %Scene or %Graphics subdirectories for examples on registering attributes using the URHO3D_ATTRIBUTE family of helper macros.
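As an illustrative sketch (MyComponent, health_ and the speed accessors are hypothetical):
\code
void MyComponent::RegisterObject(Context* context)
{
    context->RegisterFactory<MyComponent>();
    // Member attribute, serialized directly from/to the member variable
    URHO3D_ATTRIBUTE("Health", int, health_, 100, AM_DEFAULT);
    // Accessor attribute, going through getter and setter functions
    URHO3D_ACCESSOR_ATTRIBUTE("Speed", GetSpeed, SetSpeed, float, 1.0f, AM_DEFAULT);
}
\endcode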
\page Network Networking
The Network subsystem provides reliable and unreliable UDP messaging using SLikeNet. A server can be created that listens for incoming connections, and client connections can be made to the server. After connecting, code running on the server can assign the client into a scene to enable scene replication, provided that when connecting, the client specified a blank scene for receiving the updates.
%Scene replication is one-directional: the server always has authority and sends scene updates to the client at a fixed update rate, by default 30 FPS. The client responds by sending controls updates (buttons, yaw and pitch + possible extra data) also at a fixed rate.
Bidirectional communication between the server and the client can happen either using raw network messages, which are binary-serialized data, or remote events, which operate like ordinary events, but are processed on the receiving end only. Code on the server can send messages or remote events either to one client, all clients assigned into a particular scene, or to all connected clients. In contrast the client can only send messages or remote events to the server, not directly to other clients.
Note that if a particular networked application does not need scene replication, network messages and remote events can also be transmitted without assigning the client to a scene. The Chat example does just that: it does not create a scene either on the server or the client.
\section Network_Connecting Connecting to a server
Starting the server and connecting to it both happen through the Network subsystem. See \ref Network::StartServer "StartServer()" and \ref Network::Connect "Connect()". A UDP port must be chosen; the examples use the port 2345.
Note the scene (to be used for replication) and identity VariantMap supplied as parameters when connecting. The identity data can contain for example the user name or credentials, it is completely application-specified. The identity data is sent right after connecting and causes the E_CLIENTIDENTITY event to be sent on the server when received. By subscribing to this event, server code can examine incoming connections and accept or deny them. The default is to accept all connections.
After connecting successfully, client code can get the Connection object representing the server connection, see \ref Network::GetServerConnection "GetServerConnection()". Likewise, on the server a Connection object will be created for each connected client, and these can be iterated through. This object is used to send network messages or remote events to the remote peer, to assign the client into a scene (on the server only), or to disconnect.
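A minimal sketch of both sides in C++ (the port and identity contents are application-chosen, and scene_ is assumed to be a blank scene on the client):
\code
Network* network = GetSubsystem<Network>();
// Server side
network->StartServer(2345);
// Client side: supply a blank scene for replication and optional identity data
VariantMap identity;
identity["UserName"] = "TestUser";
network->Connect("localhost", 2345, scene_, identity);
// After connecting, the server connection object is available
Connection* serverConnection = network->GetServerConnection();
\endcode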
\section Network_NAT_Punchtrough NAT punchtrough
It is possible to use NAT punchtrough functionality with the network subsystem. This requires a NAT punchtrough master server to be running on a public host. To set it up, first tell the networking subsystem the IP address or domain name of the NAT punchtrough master server by calling \ref Network::SetNATServerInfo "SetNATServerInfo()".
Server:
The server should be started first by calling \ref Network::StartServer "StartServer()" and after that \ref Network::StartNATClient "StartNATClient()". If the connection to the NAT punchtrough master server succeeds, a unique GUID will be generated. Clients should use this generated GUID to connect to the server.
Client:
When the server has been successfully started and connected to the NAT punchtrough master server, clients should connect to it by calling \ref Network::AttemptNATPunchtrough "AttemptNATPunchtrough()", passing in the server-generated GUID.
For a demonstration, check example 52_NATPunchtrough.
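A rough sketch of both sides (the master server address/port and the way serverGuid reaches the client are application-specific assumptions):
\code
Network* network = GetSubsystem<Network>();
network->SetNATServerInfo("master.example.com", 61111); // Hypothetical master server
// Server side
network->StartServer(2345);
network->StartNATClient();
// Client side, using the GUID generated on the server
network->AttemptNATPunchtrough(serverGuid, scene_);
\endcode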
\section Network_Lan_Discovery LAN discovery
The Network subsystem supports a LAN discovery mode to search for running servers. When creating a server, you can set what data should be sent to the network when someone starts LAN discovery, see \ref Network::SetDiscoveryBeacon "SetDiscoveryBeacon()". This data can contain any information about the server. To start LAN discovery mode, call \ref Network::DiscoverHosts "DiscoverHosts()". When a server is found, the "NetworkHostDiscovered" event will be sent.
For a demonstration, check example 53_LANDiscovery.
\section Network_Replication Scene replication
%Network replication of scene content has been implemented in a straightforward manner, using \ref Serialization "attributes". Nodes and components that have not been created in local mode - see the CreateMode parameter of \ref Node::CreateChild "CreateChild()" or \ref Node::CreateComponent "CreateComponent()" - will be automatically replicated. Note that a replicated component created into a local node will not be replicated, as the node's locality is checked first.
The CreateMode translates into two different node and component ID ranges - replicated IDs range from 0x1 to 0xffffff, while local IDs range from 0x1000000 to 0xffffffff. This means there is a maximum of 16777215 replicated nodes or components in a scene.
If the scene was originally loaded from a file on the server, the client will also load the scene from the same file first. In this case all predefined, static objects such as the world geometry should be defined as local nodes, so that they are not needlessly retransmitted through the network during the initial update, and do not exhaust the more limited replicated ID range.
The server can be made to transmit needed resource \ref PackageFile "packages" to the client. This requires attaching the package files to the Scene by calling \ref Scene::AddRequiredPackageFile "AddRequiredPackageFile()". On the client, a cache directory for the packages must be chosen before receiving them is possible: see \ref Network::SetPackageCacheDir "SetPackageCacheDir()".
There are some things to watch out for:
- When a client is assigned to a scene, the client will first remove all existing replicated scene nodes from the scene, to prepare for receiving objects from the server. This means that for example a client's camera should be created into a local node, otherwise it will be removed when connecting.
- After connecting to a server, the client should not create, update or remove non-local nodes or components on its own. However, to create client-side special effects and such, the client can freely manipulate local nodes.
- A node's \ref Node::GetVars "user variables" VariantMap will be automatically replicated on a per-variable basis. This can be useful in transmitting data shared by several components, for example the player's score or health.
- To implement interpolation, exponential smoothing of the nodes' rendering transforms is enabled on the client. It can be controlled by two properties of the Scene, the smoothing constant and the snap threshold. Snap threshold is the distance between network updates which, if exceeded, causes the node to immediately snap to the end position, instead of moving smoothly. See \ref Scene::SetSmoothingConstant "SetSmoothingConstant()" and \ref Scene::SetSnapThreshold "SetSnapThreshold()".
- Position and rotation are Node attributes, while linear and angular velocities are RigidBody attributes. To cut down on the needed network bandwidth the physics components can be created as local on the server: in this case the client will not see them at all, and will only interpolate motion based on the node's transform changes. Replicating the actual physics components allows the client to extrapolate using its own physics simulation, and to also perform collision detection, though always non-authoritatively.
- By default the physics simulation also performs interpolation to enable smooth motion when the rendering framerate is higher than the physics FPS. This should be disabled on the server scene to ensure that the clients do not receive interpolated and therefore possibly non-physical positions and rotations. See \ref PhysicsWorld::SetInterpolation "SetInterpolation()".
- AnimatedModel does not replicate animation by itself. Rather, AnimationController will replicate its command state (such as "fade this animation in, play that animation at 1.5x speed.") To turn off animation replication, create the AnimationController as local. To ensure that also the first animation update will be received correctly, always create the AnimatedModel component first, then the AnimationController.
- Networked attributes can either be in delta update or latest data mode. Delta updates are small incremental changes and must be applied in order, which may cause increased latency if there is a stall in network message delivery eg. due to packet loss. High volume data such as position, rotation and velocities are transmitted as latest data, which does not need ordering, instead this mode simply discards any old data received out of order. Note that node and component creation (when initial attributes need to be sent) and removal can also be considered as delta updates and are therefore applied in order.
- To avoid going through the whole scene when sending network updates, nodes and components explicitly mark themselves for update when necessary. When writing your own replicated C++ components, call \ref Component::MarkNetworkUpdate "MarkNetworkUpdate()" in member functions that modify any networked attribute.
- The server update logic orders replication messages so that parent nodes are created and updated before their children. Remote events are queued and only sent after the replication update to ensure that if they originate from a newly created node, it will already exist on the receiving end. However, it is also possible to specify unordered transmission for a remote event, in which case that guarantee does not hold.
- Nodes have the concept of the \ref Node::SetOwner "owner connection" (for example the player that is controlling a specific game object), which can be set in server code. This property is not replicated to the client. Messages or remote events can be used instead to tell the players what object they control.
- If you want to run the same server logic for both the locally connecting client as well as remote clients, you can use both the server & client functionality in Network subsystem simultaneously. However in this case you need 2 copies of the scene: server and client. Only the client scene should be rendered on the local client, while the server scene is used for simulation only.
\section Network_InterestManagement Interest management
%Scene replication includes a simple, distance-based interest management mechanism for reducing bandwidth use. To use it, create the NetworkPriority component on a Node you wish to apply interest management to. The component can be created as local, as it is not important to the clients.
This component has three parameters for controlling the update frequency: \ref NetworkPriority::SetBasePriority "base priority", \ref NetworkPriority::SetDistanceFactor "distance factor", and \ref NetworkPriority::SetMinPriority "minimum priority".
A current priority value is calculated on each server update as "base priority - distance factor * distance." Additionally, it can never go lower than the minimum priority. This value is then added to an update accumulator. Whenever the update accumulator reaches 100.0, the attribute changes to the node and its components are sent, and the accumulator is reset.
The default values are base priority 100.0, distance factor 0.0, and minimum priority 0.0. This means that by default an update is always sent (which is also the case if the node has no NetworkPriority component.) Additionally, there is a rule that the node's owner connection always receives updates at full frequency. This rule can be controlled by calling \ref NetworkPriority::SetAlwaysUpdateOwner "SetAlwaysUpdateOwner()".
Calculating the distance requires the client to tell its current observer position (typically, either the camera's or the player character's world position.) This is accomplished by the client code calling \ref Connection::SetPosition "SetPosition()" on the server connection. The client can also tell its current observer rotation by
calling \ref Connection::SetRotation "SetRotation()" but that will only be useful for custom logic, as it is not used by the NetworkPriority component.
For now, creation and removal of nodes is always sent immediately, without consulting interest management. This is based on the assumption that nodes' motion updates consume the most bandwidth.
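A sketch of typical usage (node, cameraNode and serverConnection are assumed to exist):
\code
// On the server: reduce the update frequency of a less important object
NetworkPriority* priority = node->CreateComponent<NetworkPriority>(LOCAL);
priority->SetBasePriority(100.0f);
priority->SetDistanceFactor(2.0f);
priority->SetMinPriority(20.0f);
// On the client: report the observer position so that distance can be calculated
serverConnection->SetPosition(cameraNode->GetWorldPosition());
\endcode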
\section Network_Controls Client controls update
The Controls structure is used to send controls information from the client to the server, by default also at 30 FPS. This includes held down buttons, which is an application-defined 32-bit bitfield, floating point yaw and pitch, and possible extra data (for example the currently selected weapon) stored within a VariantMap.
It is up to the client code to ensure the controls are kept up-to-date, by calling \ref Connection::SetControls "SetControls()" on the server connection. The event E_NETWORKUPDATE will be sent to remind of the impending update, and the event E_NETWORKUPDATESENT will be sent after the update. The controls can then be inspected on the server side by calling \ref Connection::GetControls "GetControls()".
The controls update message also includes a running 8-bit timestamp, see \ref Connection::GetTimeStamp "GetTimeStamp()", and the client's observer position / rotation for interest management. To conserve bandwidth, the position and rotation values are left out if they have never been assigned.
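A sketch of a client-side controls update (CTRL_FORWARD is a hypothetical application-defined button bit; yaw_, pitch_ and currentWeapon_ are assumed application state):
\code
Controls controls;
controls.Set(CTRL_FORWARD, GetSubsystem<Input>()->GetKeyDown(KEY_W));
controls.yaw_ = yaw_;
controls.pitch_ = pitch_;
controls.extraData_["Weapon"] = currentWeapon_; // Optional application-specific data
GetSubsystem<Network>()->GetServerConnection()->SetControls(controls);
\endcode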
\section Network_ClientPrediction Client-side prediction
Urho3D does not implement built-in client-side prediction for player controllable objects due to the difficulty of rewinding and resimulating a generic physics simulation. However, when not using
physics for player movement, it is possible to intercept the authoritative network updates from the server and build a prediction system on the application level.
By calling \ref Serializable::SetInterceptNetworkUpdate "SetInterceptNetworkUpdate()" the update of an individual networked attribute is redirected to send an event (E_INTERCEPTNETWORKUPDATE) instead of applying the attribute value directly. This should be called on the client for the node or component that is to be predicted. For example to redirect a Node's position update:
\code
node->SetInterceptNetworkUpdate("Network Position", true);
\endcode
The event includes the attribute name, index, new value as a Variant, and the latest 8-bit controls timestamp that the server has seen from the client. Typically, the event handler would store the value that arrived from the server and set an internal "update arrived" flag, which the application logic update code could use later on the same frame, by taking the server-sent value and replaying any user input on top of it. The timestamp value can be used to estimate how many client controls packets have been sent during the roundtrip time, and how much input needs to be replayed.
\section Network_Messages Raw network messages
All network messages have an integer ID. The first ID you can use for custom messages is 153 (lower IDs are reserved for SLikeNet's or the %Network subsystem's internal use.) Messages can be sent either unreliably or reliably, in-order or unordered. The data payload is simply raw binary data that can be crafted by using for example VectorBuffer.
To send a message to a Connection, use its \ref Connection::SendMessage "SendMessage()" function. On the server, messages can also be broadcast to all client connections by calling the \ref Network::BroadcastMessage "BroadcastMessage()" function.
When a message is received, and it is not an internal protocol message, it will be forwarded as the E_NETWORKMESSAGE event. See the Chat example for details of sending and receiving.
For high performance, consider using unordered messages, because for in-order messages there is only a single channel within the connection, and all previous in-order messages must arrive first before a new one can be processed.
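A sketch of sending a custom message (MSG_CHAT is a hypothetical application-defined ID, and serverConnection is assumed to exist):
\code
const int MSG_CHAT = 153; // First free custom message ID
VectorBuffer msg;
msg.WriteString("Hello from the client");
serverConnection->SendMessage(MSG_CHAT, true, true, msg); // Reliable, in-order
\endcode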
\section Network_RemoteEvents Remote events
A remote event consists of its event type (name hash), a flag that tells whether it is to be sent in-order or unordered, and the event data VariantMap. It can optionally be set to originate from a specific Node in the receiver's scene ("remote node event.")
To send a remote event to a Connection, use its \ref Connection::SendRemoteEvent "SendRemoteEvent()" function. To broadcast remote events to several connections at once (server only), use Network's \ref Network::BroadcastRemoteEvent "BroadcastRemoteEvent()" function.
For safety, allowed remote event types must be registered. See \ref Network::RegisterRemoteEvent "RegisterRemoteEvent()". The registration affects only receiving events; sending any event is always allowed. There is a fixed blacklist of event types defined in Source/Urho3D/Network/Network.cpp that pose a security risk and are never allowed to be registered for reception; for example E_CONSOLECOMMAND.
Like with ordinary events, in script remote event types are strings instead of name hashes for convenience.
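A sketch of registering and sending a hypothetical remote event:
\code
// On the receiving end, allow the event type
GetSubsystem<Network>()->RegisterRemoteEvent("PlayerSpawned");
// On the sending end, send it in-order with custom data
VariantMap eventData;
eventData["Position"] = Vector3::ZERO;
serverConnection->SendRemoteEvent("PlayerSpawned", true, eventData);
\endcode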
Remote events will always have the originating connection as a parameter in the event data. Here is how to get it in both C++ and script (in C++, include NetworkEvents.h):
C++:
\code
using namespace RemoteEventData;
Connection* remoteSender = static_cast<Connection*>(eventData[P_CONNECTION].GetPtr());
\endcode
%Script:
\code
Connection@ remoteSender = eventData["Connection"].GetPtr();
\endcode
\section Network_HttpRequests HTTP requests
In addition to UDP messaging, the network subsystem allows making HTTP requests. Use the \ref Network::MakeHttpRequest "MakeHttpRequest()" function for this. You can specify the URL, the verb to use (default GET if empty), optional headers and optional post data. The HttpRequest object that is returned acts like a Deserializer, and you can read the response data in suitably sized chunks. After the whole response is read, the connection closes. The connection can also be closed early by allowing the request object to expire.
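A sketch of making a request and reading the response as it arrives (the URL is a placeholder):
\code
SharedPtr<HttpRequest> request = GetSubsystem<Network>()->MakeHttpRequest("http://example.com/data.txt");
// Later, for example in a frame update handler, read whatever has arrived so far
unsigned available = request->GetAvailableSize();
if (available)
{
    PODVector<unsigned char> buffer(available);
    request->Read(buffer.Buffer(), buffer.Size());
}
\endcode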
\section Network_Simulation Network conditions simulation
The Network subsystem can optionally add delay to sending packets, as well as simulate packet loss. See \ref Network::SetSimulatedLatency "SetSimulatedLatency()" and \ref Network::SetSimulatedPacketLoss "SetSimulatedPacketLoss()".
\page Database Database
The Database subsystem is built into the Urho3D library only when one of these two \ref Build_Options "build options" is enabled: URHO3D_DATABASE_ODBC and URHO3D_DATABASE_SQLITE. When both options are enabled, URHO3D_DATABASE_ODBC takes precedence. These build options determine which database API the subsystem will use. The ODBC DB API is more suitable for native applications, especially game servers, where it allows the app to establish a connection to any ODBC-compliant database like SQLite, MySQL/MariaDB, PostgreSQL, Sybase SQL, Oracle, etc. The SQLite DB API, on the other hand, is suitable for mobile applications, which embed the SQLite database and its engine into the app itself. The Database subsystem wraps the underlying DB API using a unified Urho3D API, so that no or minimal code changes are required from the library user when switching between these two build options.
Currently the implementation just supports immediate SQL statement execution. Prepared statements and transaction management will be added later when the need arises. The subsystem has a simple database connection pooling capability. This internal database connection pool should not be confused with the ODBC connection pool option available when the ODBC DB API is being used. The internal pooling is enabled by default, except when the ODBC DB API is being used and an ODBC driver manager 3.0 or later is detected in the host system, in which case the ODBC connection pool option should be used instead to manage the database connection pooling.
\section Establish_DbConnection Establishing database connection
A new database connection is established by calling \ref Database::Connect "Connect()" and passing it a so-called database connection string. The database connection string not only identifies which database to connect to, but also carries other relevant database connection settings, like database user id, user password, host, and port number. The format of the database connection string is controlled by the underlying DB API. In general the connection string is really the only thing that needs to be changed when switching the underlying DB API and when switching the databases to connect to.
When the connection is successfully established, a valid DbConnection object is returned. When done with the database connection, an application should disconnect it by calling \ref Database::Disconnect "Disconnect()". This is good practice. When the Urho3D game engine exits, the destructor of the database subsystem automatically disconnects all still-active database connections.
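A sketch of the connection lifecycle (the SQLite file path is a placeholder; connection string formats are described below):
\code
Database* database = GetSubsystem<Database>();
DbConnection* connection = database->Connect("test.db"); // Format depends on the underlying DB API
if (connection)
{
    // ... execute SQL statements here ...
    database->Disconnect(connection);
}
\endcode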
\subsection ODBC_ConnString ODBC connection string
The exact format of the ODBC connection string depends on the ODBC driver for a specific database vendor. Consult the accompanying documentation of the specific ODBC driver on how to construct the database connection string. Both DSN and DSN-less connection string are supported. For the purpose of illustration, this subsection will just explain how to connect to a MySQL/MariaDB database as an example. The MySQL/MariaDB database server is assumed to be running on a localhost on a default port with database name called 'test', and user id & password are "testuser" & "testpassword".
\subsubsection DSN_ConnString DSN connection string
Use the GUI tool provided by the host system to define the ODBC data source. It can be a user DSN or a system DSN. On a Linux host system, simply add the new data source entry in '~/.odbc.ini' or '/etc/odbc.ini', respectively. Create the file if it does not exist yet. The data source entry must at least contain the following information.
\verbatim
# These settings assume the host system uses unixODBC as ODBC driver manager
# Settings for iODBC driver manager would be slightly different
[testDSN]
Driver=MariaDB # This is the name of the ODBC driver installed in the /etc/odbcinst.ini
Description=MariaDB test database
Server=localhost
Port=
User=testuser
Password=testpassword
Database=test
Option=3
Socket=
\endverbatim
To connect to this data source, use the following connection string:
\verbatim
DSN=testDSN
\endverbatim
\subsubsection DSN_less_ConnString DSN-less connection string
To connect to the same database above without pre-configuring it as an ODBC data source, use the connection string like this:
\verbatim
Driver=MariaDB;Database=test;User=testuser;Password=testpassword;Option=3
\endverbatim
\subsection SQLite_ConnString SQLite connection string
The SQLite database is a single disk file. The SQLite connection string is simply a path to the location of that single disk file. Both absolute and relative paths work. When the path is valid but the disk file does not exist yet, the SQLite database engine will automatically create the disk file and hence create a new empty database in the process. The drawback with this approach is that there is no way to pass additional database connection settings. In order to do that, the SQLite DB API also supports connection strings using the RFC 3986 URI format. Consult the SQLite documentation on [URI format](https://www.sqlite.org/uri.html) for more detail. For illustration purposes, this subsection will assume the SQLite disk file is called 'test.db' located in the home directory and the current working directory is the home directory.
\subsubsection Path_ConnString Path connection string
With the above assumption, the following example connection strings work equally.
Relative path:
\verbatim
test.db
\endverbatim
Or absolute path:
\verbatim
/home/testuser/test.db
\endverbatim
\subsubsection URI_ConnString URI connection string
In this format the additional database connection setting can be passed as query parameters. As an example, to connect to the same database as above but in read-only and shared-cache mode then the connection string can be rewritten as:
\verbatim
file:./test.db?mode=ro&cache=shared
\endverbatim
Or:
\verbatim
file:///home/testuser/test.db?mode=ro&cache=shared
\endverbatim
\section Immediate_Execution Immediate SQL statement execution
Use the \ref DbConnection::Execute() "Execute()" method to execute an SQL statement in immediate mode. In immediate mode, the SQL statement is opened, prepared, executed, and finalized in one go. It is a convenient method to perform an ad hoc query, but it may not be good for system performance when the same query is performed repeatedly. The method returns a DbResult object regardless of whether the query type is DML (Data Manipulation Language) or DDL (Data Definition Language). The %DbResult object only contains the resultset when the SQL statement being executed is a select query. Use \ref DbResult::GetNumColumns() "GetNumColumns()" and \ref DbResult::GetNumRows() "GetNumRows()" to find out the size of the resultset. Use \ref DbResult::GetColumns() "GetColumns()" and \ref DbResult::GetRows() "GetRows()" to get the actual column headers data and rows data, respectively. For a DML statement, use \ref DbResult::GetNumAffectedRows() "GetNumAffectedRows()" to find out the number of affected rows.
The number of rows in the %DbResult object may be less than the actual number of rows fetched from the database. This is because the fetched rows can be filtered out at the instruction of an \ref DB_Cursor "E_DBCURSOR" event handler. The whole row-fetching process can also be aborted upon request of the event handler.
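For example (a sketch; the table and columns are hypothetical):
\code
DbResult result = connection->Execute("SELECT name, score FROM highscores");
for (unsigned i = 0; i < result.GetNumRows(); ++i)
{
    const VariantVector& row = result.GetRows()[i];
    String name = row[0].GetString();
    int score = row[1].GetInt();
}
\endcode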
\section Prepare_Bind_Execution SQL execution using prepared statements and dynamic parameter bindings
Not yet supported at this moment.
\section Transaction_Management Transaction Management
Not yet supported at this moment. Currently all statements are auto-committed unless the database is connected as read-only (in which case DML and DDL statements would cause an error to be logged).
\section DB_Cursor Database cursor event
When executing an SQL statement there is an option to enable the sending of database cursor events. The event is sent for each row in the resultset as it is being fetched, i.e. the event can be sent multiple times before the Execute() method returns. An application can subscribe to this event to influence how the resultset is populated in the %DbResult object.
The E_DBCURSOR event has input and output parameters.
Output parameters:
\verbatim
P_SQL SQL query string currently being executed (String)
P_NUMCOLS Number of columns in the resultset (unsigned)
P_COLHEADERS Column headers in the resultset (StringVector)
P_COLVALUES Row data as it is being fetched (VariantVector)
\endverbatim
%Input parameters:
\verbatim
P_FILTER Set to true to filter out this row from the DbResult object
P_ABORT Set to true to abort further fetching process
\endverbatim
P_FILTER can be used for any additional client-side filtering logic that is otherwise difficult to carry out at the database server side using a WHERE or HAVING clause. P_ABORT, when used, does not affect rows that are already populated into the %DbResult object.
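A sketch of a cursor event handler, assuming Execute() is called with its cursor-event parameter enabled (MyApp and the filtering rule are hypothetical):
\code
SubscribeToEvent(E_DBCURSOR, URHO3D_HANDLER(MyApp, HandleDbCursor));

void MyApp::HandleDbCursor(StringHash eventType, VariantMap& eventData)
{
    using namespace DbCursor;
    const VariantVector& colValues = eventData[P_COLVALUES].GetVariantVector();
    // Filter out rows whose first column is negative (hypothetical rule)
    if (colValues[0].GetInt() < 0)
        eventData[P_FILTER] = true;
}
\endcode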
\page Multithreading Multithreading
Urho3D uses a task-based multithreading model. The WorkQueue subsystem can be supplied with tasks described by the WorkItem structure, by calling \ref WorkQueue::AddWorkItem "AddWorkItem()". These will be executed in background worker threads. The function \ref WorkQueue::Complete "Complete()" will complete all currently pending tasks, and execute them also in the main thread to make them finish faster.
On single-core systems no worker threads will be created, and tasks are immediately processed by the main thread instead. In the presence of more cores, a worker thread will be created for each hardware core except one which is reserved for the main thread. Hyperthreaded cores are not included, as creating worker threads also for them leads to unpredictable extra synchronization overhead.
The work items include a function pointer to call, with the signature
\verbatim
void WorkFunction(const WorkItem* item, unsigned threadIndex)
\endverbatim
The thread index ranges from 0 to n, where 0 represents the main thread and n is the number of worker threads created. Its function is to aid in splitting work into per-thread data structures that need no locking. The work item also contains three void pointers: start, end and aux, which can be used to describe a range of sub-work items, and an auxiliary data structure, which may for example be the object that originally queued the work.
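A sketch of queuing a work item (the data array and count are assumed):
\code
void ScaleWorkFunction(const WorkItem* item, unsigned threadIndex)
{
    // start_ and end_ bound the sub-range of floats this item processes
    float* start = static_cast<float*>(item->start_);
    float* end = static_cast<float*>(item->end_);
    for (float* i = start; i != end; ++i)
        *i *= 2.0f;
}

WorkQueue* queue = GetSubsystem<WorkQueue>();
SharedPtr<WorkItem> item = queue->GetFreeItem();
item->workFunction_ = ScaleWorkFunction;
item->start_ = data;
item->end_ = data + count;
queue->AddWorkItem(item);
queue->Complete(M_MAX_UNSIGNED); // Finish all pending work, helping in the main thread
\endcode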
Multithreading is so far not exposed to scripts, and is currently used only in a limited manner: to speed up the preparation of rendering views, including lit object and shadow caster queries, occlusion tests and particle system, animation and skinning updates. Raycasts into the Octree are also threaded, but physics raycasts are not. Additionally there are dedicated threads for audio mixing and background loading of resources.
When making your own work functions or threads, observe that the following things are unsafe and will result in undefined behavior and crashes, if done outside the main thread:
- Modifying scene or %UI content
- Modifying GPU resources
- Executing script functions
- Pointing SharedPtr's or WeakPtr's to the same RefCounted object from multiple threads simultaneously
Using the Profiler is treated as a no-op when called from outside the main thread. Trying to send an event or get a resource from the ResourceCache when not in the main thread will cause an error to be logged. %Log messages from other threads are collected and handled in the main thread at the end of the frame.
\page AttributeAnimation Attribute animation
Attribute animation is a mechanism for animating the values of an object's attributes. Objects derived from Animatable can use attribute animation; this includes the Node class and all Component and UIElement subclasses.
There are two ways to use attribute animation. The first is to create an attribute animation in code and apply it to an object's attribute. Here is a simple example of light color animation:
\code
SharedPtr<ValueAnimation> colorAnimation(new ValueAnimation(context_));
colorAnimation->SetKeyFrame(0.0f, Color::WHITE);
colorAnimation->SetKeyFrame(2.0f, Color::YELLOW);
colorAnimation->SetKeyFrame(4.0f, Color::WHITE);
light->SetAttributeAnimation("Color", colorAnimation, WM_LOOP);
\endcode
In the above code we first create a ValueAnimation object called colorAnimation and set three key frame values to it, then assign it to the light's color attribute. (Note: for the animation to look correct in loop mode, the last key frame must be equal to the first one.)
Another way is to load an attribute animation resource; here is a simple example:
\code
ValueAnimation* colorAnimation = cache->GetResource<ValueAnimation>("Scene/LightColorAnimation.xml");
light->SetAttributeAnimation("Color", colorAnimation, WM_LOOP);
\endcode
Attribute animation supports three different wrap modes:
- WM_LOOP: Loop mode; when the animation reaches the end, it loops back to the beginning.
- WM_ONCE: Play once mode; when the animation is finished, it is removed from the object.
- WM_CLAMP: Clamp mode; when the animation is finished, it keeps the last key frame's value.
The playback speed (default 1, i.e. original speed) of an animation, as well as its wrap mode, can be adjusted on the fly.
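For example, assuming the light color animation from above is playing, its speed and wrap mode could be changed like this:
\code
// Play the color animation at double speed, and stop at the last keyframe instead of looping
light->SetAttributeAnimationSpeed("Color", 2.0f);
light->SetAttributeAnimationWrapMode("Color", WM_CLAMP);
\endcode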
The ObjectAnimation class can be used to group together multiple value animations that affect different attributes. For example when the user wants to apply position and color animation for a light, the following code can be used. Note that the object animation is attached to the light's scene node, so a special syntax is needed to refer to the light component's attribute.
\code
// Create light animation
SharedPtr<ObjectAnimation> lightAnimation(new ObjectAnimation(context_));
// Create light position animation
SharedPtr<ValueAnimation> positionAnimation(new ValueAnimation(context_));
// Use spline interpolation method
positionAnimation->SetInterpolationMethod(IM_SPLINE);
// Set spline tension
positionAnimation->SetSplineTension(0.7f);
positionAnimation->SetKeyFrame(0.0f, Vector3(-30.0f, 5.0f, -30.0f));
positionAnimation->SetKeyFrame(1.0f, Vector3( 30.0f, 5.0f, -30.0f));
positionAnimation->SetKeyFrame(2.0f, Vector3( 30.0f, 5.0f, 30.0f));
positionAnimation->SetKeyFrame(3.0f, Vector3(-30.0f, 5.0f, 30.0f));
positionAnimation->SetKeyFrame(4.0f, Vector3(-30.0f, 5.0f, -30.0f));
// Set position animation
lightAnimation->AddAttributeAnimation("Position", positionAnimation);
// Create light color animation
SharedPtr<ValueAnimation> colorAnimation(new ValueAnimation(context_));
colorAnimation->SetKeyFrame(0.0f, Color::WHITE);
colorAnimation->SetKeyFrame(1.0f, Color::RED);
colorAnimation->SetKeyFrame(2.0f, Color::YELLOW);
colorAnimation->SetKeyFrame(3.0f, Color::GREEN);
colorAnimation->SetKeyFrame(4.0f, Color::WHITE);
// Set Light component's color animation
lightAnimation->AddAttributeAnimation("@Light/Color", colorAnimation);
// Apply light animation to light node
lightNode->SetObjectAnimation(lightAnimation);
\endcode
%Object animations can also be loaded from file, like:
\code
ObjectAnimation* lightAnimation = cache->GetResource<ObjectAnimation>("Scene/LightAnimation.xml");
lightNode->SetObjectAnimation(lightAnimation);
\endcode
Attribute animation uses either linear or spline interpolation for floating point types (float, Vector2, Vector3 etc.) and no interpolation for integer and non-numeric types (such as int or bool). Alternatively, interpolation can be turned off for any data type by setting the interpolation method to IM_NONE (see \ref ValueAnimation::SetInterpolationMethod "SetInterpolationMethod()"). This allows e.g. animating %UI elements by modifying the element's image rect to cover a series of animation frames.
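A sketch of that %UI use case: the keyframes below assume a hypothetical sprite sheet texture with 16x16 animation frames laid out side by side, applied to the "Image Rect" attribute of a hypothetical BorderImage-derived element called image.
\code
SharedPtr<ValueAnimation> frameAnimation(new ValueAnimation(context_));
frameAnimation->SetInterpolationMethod(IM_NONE);
// Each keyframe selects the next 16x16 frame from the sprite sheet
frameAnimation->SetKeyFrame(0.0f, IntRect(0, 0, 16, 16));
frameAnimation->SetKeyFrame(0.5f, IntRect(16, 0, 32, 16));
frameAnimation->SetKeyFrame(1.0f, IntRect(32, 0, 48, 16));
image->SetAttributeAnimation("Image Rect", frameAnimation, WM_LOOP);
\endcode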
\section AttributeAnimation_Classes Attribute animation classes
- Animatable: Base class for animatable objects, which can have animations assigned to their individual attributes (ValueAnimation), or an animation which affects several attributes (ObjectAnimation).
- ValueAnimation: Includes key frame values for a single attribute.
- ObjectAnimation: Includes one or more attribute animations and their wrap modes and speeds for an Animatable object.
- ValueAnimationInfo: Base class for runtime instances of an attribute animation, which includes the referred animation, wrap mode, speed and time position.
\page SplinePath Spline path
\section SplinePath_Intro The SplinePath component
SplinePath is a component that moves a node along a path defined by a series of nodes acting as 'control points'. The node being moved is called the 'controlled node'.
\subsection SplinePath_BuildPath Building the path
A path is built from ordered points. When setting the points from nodes using \ref SplinePath::AddControlPoint "AddControlPoint()", an index is used to order them. At least two nodes are required to build a path.
\subsection SplinePath_RemovePoints Removing points from the path
Points can be removed:
- either individually using \ref SplinePath::RemoveControlPoint "RemoveControlPoint()".
- or globally using \ref SplinePath::ClearControlPoints "ClearControlPoints()".
\subsection SplinePath_ControlNode Assigning the controlled node
The controlled node is assigned using \ref SplinePath::SetControlledNode "SetControlledNode()".
\subsection SplinePath_Moving Moving the controlled node along the path
The controlled node is moved manually according to a time step, using \ref SplinePath::Move "Move()" in your update function.
\subsection SplinePath_BehaviorSettings Behavior controls
The behavior of the node is mainly influenced by its:
- \ref SplinePath::SetSpeed "speed".
- \ref SplinePath::SetInterpolationMode "interpolation mode" used to follow the path. Available modes are BEZIER_CURVE, CATMULL_ROM_CURVE, LINEAR_CURVE and CATMULL_ROM_FULL_CURVE.
\subsection SplinePath_ManualControl Taking manual control of the controlled node
The control node position can be:
- reset to the starting position (first point in the path) using \ref SplinePath::Reset "Reset()".
- set to a given position of the path using \ref SplinePath::SetPosition "SetPosition()". Position is expressed from 0.f to 1.f, where 0 is the start and 1 is the end of the path.
\subsection SplinePath_Queries Querying spline path information
At any time you can query:
- the \ref SplinePath::GetSpeed "speed".
- the \ref SplinePath::GetLength "path length".
- the \ref SplinePath::GetPosition "parent node's last position on the spline".
- the \ref SplinePath::GetControlledNode "controlled node".
- \ref SplinePath::GetPoint "a point on the spline path" from 0.f to 1.f, where 0 is the start of the path and 1 is the end.
- whether the \ref SplinePath::IsFinished "destination (last point of the path) is reached".
\subsection SplinePath_Debug Debugging
As for any component, a \ref SplinePath::DrawDebugGeometry "debugging function" is supplied to visually check the component.
\subsection SplinePath_SampleCode Sample code
The following sample demonstrates how to build a path from 2 points, assign a controlled node and move it along the path according to speed and interpolation mode.
\code
// Initial point
Node* startNode = scene_->CreateChild("Start");
startNode->SetPosition(Vector3(-20.0f, 0.0f, -20.0f));
// Target point
Node* targetNode = scene_->CreateChild("Target");
targetNode->SetPosition(Vector3(20.0f, 2.0f, 20.0f));
// Node to move along the path ('controlled node')
Node* movingNode = scene_->CreateChild("MovingNode");
// Spline path
Node* pathNode = scene_->CreateChild("PathNode");
SplinePath* path = pathNode->CreateComponent<SplinePath>();
path->AddControlPoint(startNode, 0);
path->AddControlPoint(targetNode, 1);
path->SetInterpolationMode(LINEAR_CURVE);
path->SetSpeed(10.0f);
path->SetControlledNode(movingNode);
\endcode
In your update function, move the controlled node using \ref SplinePath::Move "Move()":
\code
path->Move(eventData[P_TIMESTEP].GetFloat());
\endcode
\page Tools Tools
\section Tools_AssetImporter AssetImporter
Loads various 3D formats supported by the Open Asset Import Library (http://assimp.sourceforge.net/) and saves Urho3D model, animation, material and scene files from them. For the list of supported formats, see http://assimp.sourceforge.net/main_features_formats.html.
An alternative export path for Blender is to use the Urho3D add-on (https://github.com/reattiva/Urho3D-Blender).
Usage:
\verbatim
AssetImporter <command> <input file> <output file> [options]
Commands:
model Output a model
anim Output animation(s)
scene Output a scene
node Output a node and its children (prefab)
dump Dump scene node structure. No output file is generated
lod Combine several Urho3D models as LOD levels of the output model
Syntax: lod <dist0> <mdl0> <dist1> <mdl1> ... <output file>
Options:
-b Save scene in binary format, default format is XML
-j Save scene in JSON format, default format is XML
-h Generate hard instead of smooth normals if input has no normals
-i Use local IDs for scene nodes
-l Output a material list file for models
-na Do not output animations
-nm Do not output materials
-nt Do not output material textures
-nc Do not use material diffuse color value, instead output white
-nh Do not save full node hierarchy (scene mode only)
-ns Do not create subdirectories for resources
-nz Do not create a zone and a directional light (scene mode only)
-nf Do not fix infacing normals
-ne Do not save empty nodes (scene mode only)
-mb <x> Maximum number of bones per submesh. Default 64
-p <path> Set path for scene resources. Default is output file path
-r <name> Use the named scene node as root node
-f <freq> Animation tick frequency to use if unspecified. Default 4800
-o Optimize redundant submeshes. Loses scene hierarchy and animations
-s <filter> Include non-skinning bones in the model's skeleton. Can be given a
case-insensitive semicolon separated filter list. Bone is included
if its name contains any of the filters. Prefix filter with minus
sign to use as an exclude. For example -s "Bip01;-Dummy;-Helper"
-t Generate tangents
-v Enable verbose Assimp library logging
-eao Interpret material emissive texture as ambient occlusion
-cm Check and do not overwrite if material exists
-ct Check and do not overwrite if texture exists
-ctn Check and do not overwrite if texture has newer timestamp
-am Export all meshes even if identical (scene mode only)
-bp Move bones to bind pose before saving model
-split <start> <end> (animation model only)
Split animation, will only import from start frame to end frame
-np Do not suppress $fbx pivot nodes (FBX files only)
\endverbatim
The material list is a text file, one material per line, saved alongside the Urho3D model. It is used by the scene editor to automatically apply the imported default materials when setting a new model for a StaticModel, StaticModelGroup, AnimatedModel or Skybox component, and can also be manually invoked by calling \ref StaticModel::ApplyMaterialList "ApplyMaterialList()". The list files can safely be deleted if not needed.
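For example, applying the list manually after assigning a model could look like this (resource names are hypothetical):
\code
StaticModel* object = node->CreateComponent<StaticModel>();
object->SetModel(cache->GetResource<Model>("Models/Imported.mdl"));
// With no argument, looks for the list file Models/Imported.txt next to the model resource
object->ApplyMaterialList();
\endcode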
In model or scene mode, the AssetImporter utility will also automatically save non-skeletal node animations into the output file directory.
\section Tools_OgreImporter OgreImporter
Loads OGRE .mesh.xml and .skeleton.xml files and saves them as Urho3D .mdl (model) and .ani (animation) files. For other 3D formats and whole-scene importing, see AssetImporter instead; note however that AssetImporter does not handle the OGRE formats as completely as this tool does.
Usage:
\verbatim
OgreImporter <input file> <output file> [options]
Options:
-l Output a material list file
-na Do not output animations
-nm Do not output morphs
-r Output only rotations from animations
-s Split each submesh into own vertex buffer
-t Generate tangents
-mb <x> Maximum number of bones per submesh, default 64
\endverbatim
Note: outputting only bone rotations may help when using an animation in a different model, but if bone position changes were used for effect, the animation may become less lively. Unpredictable results may occur when using an animation with a model it was not originally intended for, as Urho3D does not specifically attempt to retarget animations.
\section Tools_PackageTool PackageTool
Examines a directory recursively for files and subdirectories and creates a PackageFile. The package file can be added to the ResourceCache and used as if the files were on a (read-only) filesystem. The file data can optionally be compressed using the LZ4 compression library.
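For example, mounting a package produced by this tool into the resource system could be done like this:
\code
ResourceCache* cache = GetSubsystem<ResourceCache>();
SharedPtr<PackageFile> package(new PackageFile(context_, "Data.pak"));
// Add the package only if it was opened successfully and contains files
if (package->GetNumFiles())
    cache->AddPackageFile(package);
\endcode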
Use caution when using package files on Android, as the .apk is already a package itself, where arbitrary seeks can perform poorly due to compression already being used. Experiments suggest that on Android it can nevertheless be favorable to compress the package, because in that case the .apk packaging may skip its own compression, allowing better seek & read performance.
Usage:
\verbatim
PackageTool <directory to process> <package name> [basepath] [options]
Options:
-c Enable package file LZ4 compression
-q Enable quiet mode
Basepath is an optional prefix that will be added to the file entries.
\endverbatim
Alternate output usage:
\verbatim
PackageTool <output option> <package name>
Output option:
-i Output package file information
-l Output file names (including their paths) contained in the package
-L Similar to -l but also output compression ratio (compressed package file only)
\endverbatim
When PackageTool runs, it will go inside the source directory, then look for subdirectories and any files. Paths inside the package will by default be relative to the source directory, but if an extra path prefix is desired, it can be specified by the optional basepath argument.
For example, the following command (executed from the bin directory) converts all the resource files inside the Urho3D Data directory into a package called Data.pak:
\verbatim
PackageTool Data Data.pak
\endverbatim
The -c option enables LZ4 compression on the files. The -q option enables the operation to be performed without sending output to the standard output stream.
\section Tools_RampGenerator RampGenerator
Creates 1D and 2D ramp textures for use in light attenuation and spotlight spot shapes.
Alternatively, bakes the image from an .ies input file.
Usage:
\verbatim
RampGenerator <output file> <width> <power> [dimensions]
RampGenerator <input file> <output png file> <width> [dimensions]
\endverbatim
The output is saved in PNG format. The power parameter is fed into the pow() function to determine the ramp shape; a higher value gives more brightness and a more abrupt fade at the edge.
\section Tools_SpritePacker SpritePacker
Takes a series of images, packs them into a single texture, and creates a sprite sheet XML file.
Usage:
\verbatim
SpritePacker -options <input file> <input file> <output png file>
Options:
-h Shows this help message.
-px Adds x pixels of padding per image to width.
-py Adds y pixels of padding per image to height.
-ox Adds x pixels to the horizontal position per image.
-oy Adds y pixels to the vertical position per image.
-frameHeight Sets a fixed height for image and centers within frame.
-frameWidth Sets a fixed width for image and centers within frame.
-trim Trims excess transparent space from individual images (offsets by frame size).
-xml 'path' Generates a SpriteSheet XML file at path.
-debug Draws allocation boxes on sprite.
\endverbatim
\section Tools_ScriptCompiler ScriptCompiler
Compiles AngelScript file(s) to binary bytecode for faster loading. Can also dump the %Script API in Doxygen format.
Usage:
\verbatim
ScriptCompiler <input file> [resource path for includes]
ScriptCompiler -dumpapi <Doxygen output file> [C header output file]
\endverbatim
The output files are saved with the extension .asc (compiled AngelScript). Binary files are not automatically loaded in place of the text format (.as) script files; instead, resource requests and resource references in objects need to point to the compiled files. In a final build of an application it may be convenient to simply replace the text format script files with the compiled scripts.
The script API dump mode can be used to replace the 'ScriptAPI.dox' file in the 'Docs' directory. If the output file name is not provided, the script API is dumped to standard output (console) instead.
\page Unicode Unicode support
The String class supports UTF-8 encoding. However, by default strings are treated as a sequence of bytes without regard to the encoding. There is a separate
API for operating on Unicode characters, see for example \ref String::LengthUTF8 "LengthUTF8()", \ref String::AtUTF8 "AtUTF8()" and \ref String::SubstringUTF8 "SubstringUTF8()". Urho3D itself needs to be aware of the Unicode characters only in the \ref UI "user interface", when displaying text and manipulating it through user input.
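For example, the byte-oriented and character-oriented functions differ on multi-byte text:
\code
String text("café");                     // 4 Unicode characters, 5 bytes in UTF-8
unsigned bytes = text.Length();          // 5
unsigned chars = text.LengthUTF8();      // 4
String last = text.SubstringUTF8(3, 1);  // "é"
\endcode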
On Windows, wide char strings are used in all calls to the operating system, such as accessing the command line, files, and the window title. The WString class is used as a helper for conversion. On Linux & Mac OS X, 8-bit strings are used directly and are assumed to contain UTF-8.
Note that \ref FileSystem::ScanDir "ScanDir()" function may return filenames in unnormalized Unicode on Mac OS X. Unicode re-normalization is not yet implemented.
\page FileFormats Custom file formats
Urho3D tries to use existing file formats whenever possible, and define custom file formats only when absolutely necessary. Currently used custom file formats are:
\section FileFormats_Model Binary model format (.mdl)
\verbatim
Model geometry and vertex morph data
byte[4] Identifier "UMDL" or "UMD2"
uint Number of vertex buffers
For each vertex buffer:
uint Vertex count
uint Legacy vertex element mask (determines vertex size)
uint Morphable vertex range start index
uint Morphable vertex count
byte[] Vertex data (vertex count * vertex size)
In "UMD2" format, the legacy vertex element mask is replaced with the following:
uint Vertex element count
uint[] Descriptions for each vertex element, where
bits 0-7 = element data type, bits 8-15 = semantic, bits 16-23 = semantic index
uint Number of index buffers
For each index buffer:
uint Index count
uint Index size (2 for 16-bit indices, 4 for 32-bit indices)
byte[] Index data (index count * index size)
uint Number of geometries
For each geometry:
uint Number of bone mapping entries
uint[] Bone mapping data; maps geometry bone indices to global bone indices for HW skinning.
May be empty, in which case identity mapping is used.
uint Number of LOD levels
For each LOD level:
float LOD distance
uint Primitive type (0 = triangle list, 1 = line list)
uint Vertex buffer index, starting from 0
uint Index buffer index, starting from 0
uint Draw range: index start
uint Draw range: index count
uint Number of vertex morphs (may be 0)
For each vertex morph:
cstring Name of morph
uint Number of affected vertex buffers
For each affected vertex buffer:
uint Vertex buffer index, starting from 0
uint Vertex element mask for morph data. Only positions, normals & tangents are supported.
uint Vertex count
For each vertex:
uint Vertex index
Vector3 Position (if included in the mask)
Vector3 Normal (if included in the mask)
Vector3 Tangent (if included in the mask)
Skeleton data
uint Number of bones (may be 0)
For each bone:
cstring Bone name
uint Parent bone index starting from 0. Same as own bone index for the root bone
Vector3 Initial position
Quaternion Initial rotation
Vector3 Initial scale
float[12] 4x3 offset matrix for skinning
byte Bone collision info bitmask. 1 = bounding sphere 2 = bounding box
If bounding sphere data included:
float Bone radius
If bounding box data included:
Vector3 Bone bounding box minimum
Vector3 Bone bounding box maximum
Bounding box data
Vector3 Model bounding box minimum
Vector3 Model bounding box maximum
Geometry center data
For each geometry:
Vector3 Geometry center
\endverbatim
\section FileFormats_Animation Binary animation format (.ani)
\verbatim
byte[4] Identifier "UANI"
cstring Animation name
float Length in seconds
uint Number of tracks
For each track:
cstring Track name (practically same as the bone name that should be driven)
byte Mask of included animation data. 1 = bone positions 2 = bone rotations 4 = bone scaling
uint Number of keyframes
For each keyframe:
float Time position in seconds
Vector3 Position (if included in data)
Quaternion Rotation (if included in data)
Vector3 Scale (if included in data)
\endverbatim
Note: animations are stored using absolute bone transformations. Therefore only lerp-blending between animations is supported; additive pose modification is not.
\section FileFormats_Shader Direct3D9 binary shader format (.vs3, .ps3)
\verbatim
byte[4] Identifier "USHD"
short Shader type (0 = vertex, 1 = pixel)
short Shader model (3)
uint Number of constant parameters
For each constant parameter:
cstring Parameter name
byte Register index
byte Number of registers
uint Number of texture units
For each texture unit:
cstring Texture unit name
byte Sampler index
uint Bytecode size
byte[] Bytecode
\endverbatim
\section FileFormats_Shader4 Direct3D11 binary shader format (.vs4, .ps4)
\verbatim
byte[4] Identifier "USHD"
short Shader type (0 = vertex, 1 = pixel)
short Shader model (4)
uint Vertex element hash code (0 for pixel shaders)
uint Number of constant parameters
For each constant parameter:
cstring Parameter name
byte CBuffer index
uint Start byte offset in CBuffer
uint Byte size
uint Number of texture units
For each texture unit:
cstring Texture unit name
byte Sampler index
uint Bytecode size
byte[] Bytecode
\endverbatim
\section FileFormats_Package Package file (.pak)
\verbatim
byte[4] Identifier "UPAK" or "ULZ4" if compressed
uint Number of file entries
uint Whole package checksum
For each file entry:
cstring Name
uint Start offset
uint Size
uint Checksum
The compressed data for each file is the following, repeated until the file is done:
ushort Uncompressed length of block
ushort Compressed length of block
byte[] Compressed data
\endverbatim
\section FileFormats_Script Compiled AngelScript (.asc)
\verbatim
byte[4] Identifier "ASBC"
byte[] Bytecode, produced by AngelScript serializer
\endverbatim
\page CodingConventions Coding conventions
- Indent style is Allman (BSD) -like, i.e. brace on the next line from a control statement, indented on the same level. In switch-case statements the cases are on the same indent level as the switch statement.
- Indents use 4 spaces instead of tabs. Indents on empty lines should not be kept.
- Class and struct names are in camelcase beginning with an uppercase letter. They should be nouns. For example \c %DebugRenderer, \c FreeTypeLibrary, \c %Graphics.
- Functions are likewise in upper-camelcase. For example \c CreateComponent, \c SetLinearRestThreshold.
- Variables are in lower-camelcase. Member variables have an underscore appended. For example \c numContacts, \c randomSeed_.
- Constants and enumerations are in uppercase. For example \c %Vector3::ZERO or \c PASS_SHADOW.
- Pointers and references append the * or & symbol to the type without a space in between. For example \c Drawable* drawable, \c %Serializer& dest.
- Neither the macro \c NULL nor 0 should be used for null pointers; \c nullptr is used instead.
- \c override is used wherever possible.
- \c using is used instead of \c typedef in type declarations.
- `enum class` is not used for legacy reasons.
- Class definitions proceed in the following order:
- public constructors and the destructor
- public virtual functions
- public non-virtual member functions
- public static functions
- public member variables
- public static variables
- repeat all of the above in order for protected definitions, and finally private
- Header files are commented using one-line comments beginning with /// to mark them for Doxygen.
- Inline functions are defined inside the class definitions where possible, without using the inline keyword.
It's recommended to follow the <a href="https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md">C++ Core Guidelines</a>, except for items that contradict Urho3D legacy (e.g. ES.107, Enum.3, Enum.5).
Use this brief checklist to keep code style consistent among contributions and contributors:
- Prefer inplace member initialization to initializer lists.
- Prefer range-based \c for to old style \c for unless index or iterator is used by itself.
- Avoid \c auto unless it increases code readability. More details <a href="https://clang.llvm.org/extra/clang-tidy/checks/modernize-use-auto.html">here</a> and <a href="https://google.github.io/styleguide/cppguide.html#auto">here</a>.
- Use \c auto for verbose, unknown or redundant types. Example:
\code
auto iter = variables.Find(name); // verbose iterator type: HashMap<String, Variant>::Iterator
for (auto& variable : variables) { } // verbose pair type: HashMap<String, Variant>::KeyValue
auto model = MakeShared<Model>(context_); // type is already mentioned in the expression: SharedPtr<Model>
\endcode
- Use \c auto instead of manual type deduction via \c decltype and \c typename.
\page ContributionChecklist Contribution checklist
When contributing code to the Urho3D project, there are a few things to check so that the process goes smoothly.
First of all, the contribution should be wholly your own, so that you hold the copyright. If you are borrowing anything (for example a specific implementation of a math formula), you must be sure that you're allowed to do so, and give credit appropriately. For example borrowing code that is in the public domain would be OK.
Second, you need to agree that code is released under the MIT license with the copyright statement "Copyright (c) 2008-2021 the Urho3D project." Note here that "Urho3D project" is not an actual legal entity, but just shorthand to avoid listing all the contributors. You certainly retain your individual copyright. You should copy-paste the license statement from an existing .cpp or .h file to each new file that you contribute, to avoid having to add it later.
Third, there are requirements for new code that come from Urho3D striving to be a cohesive, easy-to-use package where features like events, serialization and script bindings integrate tightly. Check all that apply:
- For all code (classes, functions) for which it makes sense, both AngelScript and Lua bindings should exist. Refer to the existing bindings and the scripting documentation for specific conventions, for example the use of properties in AngelScript instead of setters / getters where possible, or Lua bindings providing both functions and properties.
- %Script bindings do not need to be made for low-level functionality that only makes sense to be called from C++, such as thread handling or byte-level access to GPU resources, or for internal but public functions that are only accessed by subsystems and are not needed when using the classes.
- Unless impossible due to missing bindings (see above), new examples should be implemented in all of C++, AngelScript and Lua.
- For new Component or UIElement subclasses, \ref Serialization "attributes" should exist for serialization, network replication and editing. The classes should be possible to create from within the editor; check the component category and supply icons for them as necessary (see the files bin/Data/Textures/EditorIcons.png and bin/Data/UI/EditorIcons.xml.)
- If the classes inherit attribute definitions from other classes, make sure that they are registered in the correct order on engine initialization.
- \ref Network "Network replication" of a component's attributes must be triggered manually by calling MarkNetworkUpdate() in each setter function that modifies a network-replicated attribute. See the Camera class for a straightforward example, and the minimal sketch after this list.
- Define \ref Events "events" where you anticipate that an external party may want to hook up to something happening. For example the editor updates its scene hierarchy window by subscribing to the scene change events: child node added/removed, component added/removed.
- Mark all application-usable classes, structs and functions with the macro URHO3D_API so that they are exported correctly in a shared library build.
- Please heed all the \ref CodingConventions "coding conventions" and study existing code where unsure. Ensure that your text editor or IDE is configured to show whitespace, so that you don't accidentally mix spaces and tabs.
- Whenever you are creating a major new feature, its usage should be documented in the .dox pages in addition to the function documentation in the header files. Create a new page if necessary, and link from an appropriate place in the existing documentation. If it's for example a new rendering feature, it could be linked from the \ref Rendering page. If the feature introduces a completely new subsystem, it should be linked from the main page list in Urho3D.dox.
- When you add a new sample application, it should be implemented in all of C++, AngelScript and Lua if possible, to show how all of the APIs are used. Check sample numbering for the next free one available. If a (C++) sample depends on a specific Urho3D build option, for example physics or networking, it can be conditionally disabled in its CMakeLists.txt. See 11_Physics sample for an example.
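A minimal sketch of a setter for a network-replicated attribute (the component and its attribute are hypothetical):
\code
void MyComponent::SetRange(float range)
{
    range_ = range;
    // Mark the component dirty so that the changed attribute gets replicated to clients
    MarkNetworkUpdate();
}
\endcode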
\section ContributionThirdParty Third party library considerations
- When you add a new third-party library, insert its license statement into the LICENSE file in the Source/ThirdParty directory. Only libraries with permissive licenses such as BSD/MIT/zlib are accepted, because complying with e.g. the LGPL is difficult on mobile platforms, and would leave the application in a legal grey area.
- Prefer small and well-focused libraries for the Urho3D runtime. For example we use stb_image instead of FreeImage to load images, as it's assumed that the application developer can control the data and do offline conversion to supported formats as necessary.
- Third-party libraries should not leak C++ exceptions or use of STL containers into the Urho3D public API. Do a wrapping for example on the subsystem level if necessary.
*/
}