The rtneuron package

The entry point of the RTNeuron Python API is a package called rtneuron. This package contains a collection of submodules with the wrapped C++ classes, as well as some free functions helpful for routine tasks such as displaying a target from a BlueConfig file.

Some examples of how to use the package can be found in the Code example and image gallery.

This page is the reference documentation for all the classes and functions provided by the rtneuron package. The documentation is divided into two sections: the first presents the wrapping of the C++ library and the second describes classes and functions that are only available in the Python package.

Wrapped C++ classes

In reality, the C++ wrapping is a subpackage called _rtneuron which contains classes and other submodules for the different C++ namespaces. When rtneuron is imported, it brings into its namespace all the contents of _rtneuron and imports all the required submodules. The C++ namespace layout is respected whenever possible in these submodules, as in rtneuron.sceneops for example.

rtneuron namespace

class rtneuron._rtneuron.AttributeMap

A key-value store with additional capabilities on the native code side.

An attribute map is a table container that stores key-value pairs. The keys are strings and the values can be scalars of type bool, int, float, string, wrapped enums and AttributeMap, or Python lists of scalars of type bool, int, float, string and wrapped enums.

Other wrapped types can be used as values only if their documentation states so.

The attribute keys are presented as regular attributes in Python. This class defines special ‘__setattr__’ and ‘__getattr__’ methods to handle attribute reads/writes and the translation of types to/from the native code. Writing to a non-existing attribute creates it. Accessing a non-existing attribute raises a ValueError exception. Trying to set an attribute with an unsupported type raises a KeyError exception.

Note
An AttributeMap cannot be nested as part of a list of values.

Examples:

a = AttributeMap()               # Create a new attribute map.
a.x = 10.0                       # Set a new attribute.
print(a.x + 3.3)                 # Retrieve the attribute value.
a.x = [1, "hi", False]           # Reset the previous attribute.
a.nested = AttributeMap()        # Nest an attribute map.
a.nested.x = [1, AttributeMap()] # Raises: AttributeMap cannot be in a list.
a.nested.x = dict                # Raises: invalid type in assignment.
a.nested.colors = ColorMap()     # OK if ColorMap has been made available
                                 # to AttributeMap in the wrapping.
a.colors = [ColorMap(), 1, "a"]  # If the above works, this will too.

Native code objects that hold attribute maps can provide extra handles for attribute modification. This implies that trying to set attribute names/values unsupported by a holder can also raise exceptions.

Tab completion inside an IPython shell works by the internal redefinition of the ‘__dir__’ method. The string conversion operator is also defined to print the attributes and their values.

__init__((object)arg1) → None
__init__( (object)arg1, (dict)arg2) -> object :
Create an AttributeMap from a dictionary

attributeChanged

Signal emitted when an attribute has been changed.

The name of the changed attribute is passed as the signal parameter.
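
For instance, a Python callable can be attached to this signal. A minimal sketch, assuming the signal exposes a connect method as other wrapped signals do:

a = AttributeMap()
def callback(name):
    print("attribute changed: " + name)
a.attributeChanged.connect(callback)  # Register the callback.
a.x = 10.0                            # Emits attributeChanged("x").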

help((AttributeMap)arg1) → None :

Print extra documentation of an AttributeMap instance if available

class rtneuron._rtneuron.Camera

A camera represents the frustum and the model part of the model-view transformation used for a single view.

The view part of the transformation will be handled internally by the Equalizer backend as this is required in cluster configurations.

__init__()

Raises an exception. This class cannot be instantiated from Python.

getProjectionFrustum((Camera)arg1) → tuple :

Gets the frustum definition of a perspective projection.

The near parameter returned is just meant to indicate the field of view; the actual parameters used for rendering are adjusted to the scene being displayed.

The results are undefined if the camera is set as orthographic.

Return
A tuple with left, right, top, bottom, near
getProjectionMatrix((Camera)arg1) → object :

Get the camera projection matrix.

Return
An OpenGL ready 4x4 numpy matrix
getProjectionOrtho((Camera)arg1) → tuple :

Gets the camera frustum for orthographic projection.

The results are undefined if the camera is using perspective projection.

Return
left, right, bottom and top
getProjectionPerspective((Camera)arg1) → tuple :

Gets the parameters of a perspective projection.

The results are undefined if the camera is set as orthographic.

Return
A tuple with the vertical field of view and the aspect ratio.
getView((Camera)arg1) → object :

Get the camera position and orientation.

Return
A tuple (position, (axis, angle)) where position and axis are [x, y, z] lists and the angle is in degrees.
getViewMatrix((Camera)arg1) → object :

Get the camera modelview matrix.

Return
An OpenGL ready 4x4 numpy matrix
isOrtho((Camera)arg1) → bool :
Return
True if the camera is applying orthographic projection, false otherwise
makeOrtho((Camera)arg1) → None :

Sets the camera to do orthographic projection preserving the current frustum.

makePerspective((Camera)arg1) → None :

Sets the camera to do perspective projection preserving the current frustum.

projectPoint((Camera)arg1, (object)arg2) → object :

Returns the 2D projected coordinates of a 3D point in world coordinates.

The third coordinate indicates whether the point is in front of (+1), coincident with (0) or behind (-1) the camera. If any coordinate is <-1 or >1, the point is outside the frustum.

setProjectionFrustum((Camera)arg1, (float)left, (float)right, (float)bottom, (float)top[, (float)near=0.1]) → None :

Sets the camera frustum for perspective projection.

Near and far are auto-adjusted by the renderer. The near value provided here is used to infer the field of view. No auto aspect ratio conservation is performed.

setProjectionOrtho((Camera)arg1, (float)left, (float)right, (float)bottom, (float)top[, (float)near=0.1]) → None :

Sets the camera frustum for orthographic projections.

setProjectionPerspective((Camera)arg1, (float)verticalFOV) → None :

Change the vertical field of view of the perspective projection.

The aspect ratio is inferred from the current projection matrix.

Parameters

verticalFOV - Angle in degrees.
setView((Camera)arg1, (object)position, (object)orientation) → None :

Sets the camera position

Parameters

position - The world position of the camera

orientation - A tuple ((x, y, z), angle) with a rotation to be applied to the camera (the initial view direction is looking down the negative z axis). The angle is in degrees.
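
As an illustration, the camera of a view can be repositioned explicitly. A minimal sketch, assuming an existing View object named view:

camera = view.camera
# Place the camera at z = 1000 looking down the negative z axis.
camera.setView([0, 0, 1000], ([0, 1, 0], 0))
position, (axis, angle) = camera.getView()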

setViewLookAt((Camera)arg1, (object)eye, (object)center, (object)up) → None :

The same as gluLookAt.

This method also sets the home position and pivotal point for manipulators that take it into account.

unprojectPoint((Camera)arg1, (object)arg2, (float)arg3) → object :

Return the 3D world coordinates of a projected point.

Parameters

point - The 2D normalized device coordinates of the point

z - The z value of the 2D point in camera coordinates. Note that for points in front of the camera this value is negative.

viewDirty

Emitted whenever the modelview matrix is modified by the rendering engine.

class rtneuron._rtneuron.CameraManipulator

Base class for all camera manipulators.

Inherits from noncopyable

Subclassed by bbp.rtneuron.CameraPathManipulator, bbp.rtneuron.TrackballManipulator, bbp.rtneuron.VRPNManipulator

__init__()

Raises an exception. This class cannot be instantiated from Python.

class rtneuron._rtneuron.CameraPath

A sequence of camera keyframes with timestamps.

class KeyFrame

Position, orientation and stereo correction of a given timestamp.

__init__((object)arg1) → None

__init__( (object)arg1, (object)arg2, (object)arg3, (float)arg4) -> object

__init__( (object)arg1, (View)arg2) -> None

orientation

An (x, y, z, w) tuple where x, y, z represent the rotation axis and w the rotation angle in degrees.

position

An (x, y, z) tuple.

stereoCorrection

A scalar multiplicative factor for the interocular distance.

CameraPath.__init__((object)arg1) → None
CameraPath.addKeyFrame((CameraPath)arg1, (float)seconds, (KeyFrame)keyframe) → None :

Adds a new key frame to the path.

If there’s a key frame with that exact timing, it is replaced. Changing the old key frame from an existing reference does not affect the camera path.

addKeyFrame( (CameraPath)arg1, (float)seconds, (View)view) -> None :

Adds a new key frame to the path from the camera and stereo correction of the given view.

If there’s a key frame with that exact timing, it is replaced. Changing the old key frame from an existing reference does not affect the camera path.

CameraPath.clear((CameraPath)arg1) → None :

Clears the camera path.

CameraPath.getKeyFrame((CameraPath)arg1, (int)index) → KeyFrame :

Returns the key frame at a given position.

Throws if the index is out of bounds.

CameraPath.getKeyFrames((CameraPath)arg1) → list :

Return a list of tuples (time, KeyFrame).

If key frames are modified the camera path will be updated.

CameraPath.load((CameraPath)arg1, (str)filename) → None :

Loads a camera path from the given file.

CameraPath.removeKeyFrame((CameraPath)arg1, (int)index) → None :

Removes the keyframe at the given position.

Throws if the index is out of bounds.

CameraPath.replaceKeyFrame((CameraPath)arg1, (int)index, (KeyFrame)frame) → None :

Replaces a keyframe at a given position with a new one.

Throws if the index is out of bounds.

CameraPath.save((CameraPath)arg1, (str)filename) → None :

Writes this camera path to the given file.

CameraPath.setKeyFrames((CameraPath)arg1, (dict)frames) → None :

Replaces the current path by a new one.

Parameters

frames - A dictionary with time in seconds as keys and KeyFrames as values
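
A minimal sketch of building a path (the timings and values are arbitrary, and the file name is hypothetical):

from rtneuron import CameraPath
path = CameraPath()
keyframe = CameraPath.KeyFrame()
keyframe.position = (0, 0, 1000)
keyframe.orientation = (0, 1, 0, 0)  # Axis (x, y, z) and angle in degrees.
keyframe.stereoCorrection = 1.0
path.addKeyFrame(0.0, keyframe)      # Key frame at t = 0 seconds.
path.save('path.cam')                # Hypothetical file name.
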
CameraPath.startTime

The time of the earliest key frame, or NaN if the path is empty.

CameraPath.stopTime

The time of the latest key frame, or NaN if the path is empty.

class rtneuron._rtneuron.CameraPathManipulator
class LoopMode

The loop mode defines what to do when the end of the camera path is reached.

Values:
  • LOOP_NONE:
Do nothing.
  • LOOP_REPEAT:
Start over the camera path.
  • LOOP_SWING:

Play the camera path in reverse until the start is reached and repeat

NONE = rtneuron._rtneuron.LoopMode.NONE
REPEAT = rtneuron._rtneuron.LoopMode.REPEAT
SWING = rtneuron._rtneuron.LoopMode.SWING
CameraPathManipulator.__init__((object)arg1) → None
CameraPathManipulator.frameDelta

Get: Return the frame delta in milliseconds

Set: Sets the delta time between keyframe samples (in milliseconds)

Use a positive value to set a fixed delta between rendered frames. A value equal to 0 means that the camera path has to be played back in real-time.

CameraPathManipulator.getKeyFrame((CameraPathManipulator)arg1, (float)milliseconds) → tuple :

Get an interpolated keyframe at the given timestamp.

Return
A tuple (position, (axis, angle), stereoCorrection) where position and axis are [x, y, z] lists and the angle is in degrees.
Version
2.4
CameraPathManipulator.load((CameraPathManipulator)arg1, (str)fileName) → None :

Loads a camera path from a file.

If the camera path contains a single keyframe, the loop mode is automatically set to LOOP_NONE.

Throws if an error occurs reading the file.

CameraPathManipulator.loopMode

Get: Returns the current loop mode. Set: Sets the loop mode (see LoopMode).

CameraPathManipulator.playbackStart

Get: Set: Overrides the start time of the camera path.

Parameters

start - Milliseconds
CameraPathManipulator.playbackStop

Get: Set: Overrides the stop time of the camera path

Parameters

end - Milliseconds
CameraPathManipulator.setPath((CameraPathManipulator)arg1, (CameraPath)arg2) → None
CameraPathManipulator.setPlaybackInterval((CameraPathManipulator)arg1, (float)arg2, (float)arg3) → None :

Overrides the start and stop time of the camera path.

Parameters

start - Milliseconds

end - Milliseconds
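
A typical use is to load a camera path and attach the manipulator to a view. A minimal sketch, assuming an existing View object named view and a hypothetical file name:

from rtneuron import CameraPathManipulator
manipulator = CameraPathManipulator()
manipulator.load('path.cam')              # Hypothetical file name.
manipulator.setPlaybackInterval(0, 5000)  # Play from 0 to 5000 ms.
view.cameraManipulator = manipulator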

class rtneuron._rtneuron.ColorMap
__init__((object)arg1) → None
getColor((ColorMap)arg1, (float)value) → tuple :

Returns the color for the given value.

Return
The color at the given value using linear interpolation of the control points.

Parameters

value - Clamped to current range before sampling the internal texture.
getPoints((ColorMap)arg1) → dict :

The control points of this color map.

getRange((ColorMap)arg1) → tuple :

Return the color map range.

load((ColorMap)arg1, (str)arg2) → None :

Load a color map from the file with the given name.

Throws if an error occurs.

save((ColorMap)arg1, (str)arg2) → None :

Save a color map to a file with the given name.

Throws if an error occurs.

setPoints((ColorMap)arg1, (dict)colorPoints) → None :

Creates the internal look up table using the map of (value, color) points given.

Parameters

colorPoints - The control points dictionary. The keys must be floats and the items are 4-float tuples (RGBA). If any channel is outside the range [0, 1] the underlying color map will be undefined.
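
A short sketch of defining a color map from control points:

from rtneuron import ColorMap
colormap = ColorMap()
colormap.setPoints({0.0: (0, 0, 1, 1),   # Blue at the range minimum.
                    1.0: (1, 0, 0, 1)})  # Red at the range maximum.
colormap.setRange(-80.0, -10.0)          # E.g. voltages in mV.
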
setRange((ColorMap)arg1, (float)min, (float)max) → None :

Changes the colormap range adjusting the point values.

The values of the points are adjusted to the new range and the dirty signal is emitted.

textureSize

Set the resolution of the internal texture used for the colormap (measured in texels).

The minimum texture size is bounded to 2 texels.

class rtneuron._rtneuron.ColorScheme

Coloring mode for structural rendering of neurons.

Values:
  • SOLID:
Render the whole neuron with its primary color.
  • RANDOM:
Use a random color for the whole neuron.
  • BY_BRANCH_TYPE:
Render dendrites with the primary color and axons with the secondary color.
  • BY_WIDTH:

Apply a different color to each vertex based on its branch width. The color is interpolated from a color map computed using both the primary and secondary colors.

If simulation display is enabled, the alpha channel of the colormap is used to modulate the final rendering color.
  • BY_DISTANCE_TO_SOMA:

Apply per-vertex colors based on the distance to the soma. The colormap used is derived from the primary and secondary colors by default, unless a by_distance_to_soma colormap is set in the colormaps attribute of the neuron object.

If simulation display is enabled, the alpha channel of the colormap is used to modulate the final rendering color.
  • NUM_COLOR_SCHEMES
BY_BRANCH_TYPE = rtneuron._rtneuron.ColorScheme.BY_BRANCH_TYPE
BY_DISTANCE_TO_SOMA = rtneuron._rtneuron.ColorScheme.BY_DISTANCE_TO_SOMA
BY_WIDTH = rtneuron._rtneuron.ColorScheme.BY_WIDTH
RANDOM = rtneuron._rtneuron.ColorScheme.RANDOM
SOLID = rtneuron._rtneuron.ColorScheme.SOLID
class rtneuron._rtneuron.DataBasePartitioning

Partitioning scheme to be applied to neurons in DB (sort-last) rendering configurations.

Values:
  • NONE
  • ROUND_ROBIN
  • SPATIAL
NONE = rtneuron._rtneuron.DataBasePartitioning.NONE
ROUND_ROBIN = rtneuron._rtneuron.DataBasePartitioning.ROUND_ROBIN
SPATIAL = rtneuron._rtneuron.DataBasePartitioning.SPATIAL
class rtneuron._rtneuron.NeuronLOD

Models used for level of detail representation of neurons.

Values:
  • MEMBRANE_MESH
  • TUBELETS
  • HIGH_DETAIL_CYLINDERS
  • LOW_DETAIL_CYLINDERS
  • DETAILED_SOMA
  • SPHERICAL_SOMA
  • NUM_NEURON_LODS
DETAILED_SOMA = rtneuron._rtneuron.NeuronLOD.DETAILED_SOMA
HIGH_DETAIL_CYLINDERS = rtneuron._rtneuron.NeuronLOD.HIGH_DETAIL_CYLINDERS
LOW_DETAIL_CYLINDERS = rtneuron._rtneuron.NeuronLOD.LOW_DETAIL_CYLINDERS
MEMBRANE_MESH = rtneuron._rtneuron.NeuronLOD.MEMBRANE_MESH
SPHERICAL_SOMA = rtneuron._rtneuron.NeuronLOD.SPHERICAL_SOMA
TUBELETS = rtneuron._rtneuron.NeuronLOD.TUBELETS
class rtneuron._rtneuron.PlaybackState

Playback state for the simulation.

State transitions are:

  • PLAYING -> PAUSED if ::SimulationPlayer.pause is called.
  • PLAYING -> FINISHED when one of the simulation window edges is reached.
  • FINISHED -> PAUSED if ::SimulationPlayer.pause is called.
  • FINISHED -> PLAYING if setSimulationTimestamp is called with a valid timestamp or setSimulationDelta is called.
  • PAUSED -> PLAYING if ::SimulationPlayer.play is called.

Values:
  • PLAYING:
State change emitted when ::SimulationPlayer.play is called and the previous state was paused or finished
  • PAUSED:
State change emitted when ::SimulationPlayer.pause is called and the previous state was playing
  • FINISHED:

State change emitted when playback reaches one edge of the playback window. The signal is emitted at the moment the timestamp is requested, but the current timestamp may be older. The signal timestampChanged should be used to know exactly the timestamp of the next frame to be displayed.

FINISHED = rtneuron._rtneuron.PlaybackState.FINISHED
PAUSED = rtneuron._rtneuron.PlaybackState.PAUSED
PLAYING = rtneuron._rtneuron.PlaybackState.PLAYING
class rtneuron._rtneuron.Pointer
__init__()

Raises an exception. This class cannot be instantiated from Python.

class rtneuron._rtneuron.RTNeuron

The main application class.

This class manages the Equalizer configuration and is the factory for other classes that are tied to a configuration (e.g. the scenes).

Inherits from std.enable_shared_from_this< RTNeuron >

__init__((object)arg1[, (list)argv=[][, (AttributeMap)attributes=rtneuron._rtneuron.AttributeMap()]]) → None :
Version
2.4 required for the profile attribute.

Parameters

argv - The command line argument list.

attributes - Global application options, including:

afferent_syn_color (floatx3[+1]): Default color to use for afferent synapse glyphs

autoadjust_simulation_window (bool): Whether the simulation player window should be adjusted automatically. Simulation window adjustment occurs when:

  • ::SimulationPlayer.setTimestamp is called
  • ::SimulationPlayer.play is called
  • A new simulation timestamp has been mapped and is ready for displaying

Auto-adjustment will not try to obtain the latest simulation data if it can lead to a deadlock (e.g. when the engine is already trying to do it for a previously requested timestamp).

efferent_syn_color (floatx3[+1]): Default color to use for efferent synapse glyphs

has_gui (bool): True to indicate to the RTNeuron object that it is running inside a Qt application.

neuron_color (floatx3[+1]): Default color for neurons

soma_radii (AttributeMap): An attribute map indexed with morphology type names as attribute names and radii as attribute values.

soma_radius (float): Default soma radius to use if no additional information is available.

profile (AttributeMap): An attribute map with options for profiling:

enable (bool): Enable frame time profiling

logfile (string): Log file name to write frame times.

compositing (bool): False to disable frame compositing; True or absent otherwise.

view (AttributeMap): An attribute map with the default view parameters (e.g. background, lod_bias, ...).

window_width (int): Width of the application window in pixels.

window_height (int): Height of the application window in pixels.

allViews

Returns the vector of active or inactive views which belong to any layout.

attributes

Returns a modifiable attribute map.

These attributes can be modified at runtime.

createConfig((RTNeuron)arg1[, (str)configFile='']) → None :

Deprecated, use init instead.

createScene((RTNeuron)arg1[, (AttributeMap)attributes=rtneuron._rtneuron.AttributeMap()]) → Scene :

Creates a Scene to be used in this application.

The attribute map includes scene attributes that are passed to the scene constructor. See here for details.

Currently, all scenes to be used inside a config must be created in all nodes before init is called.

The application does not hold any reference to the returned scene. If the caller gets rid of the returned reference and no view holds the scene, the scene will be deallocated.

Do not call from outside the main thread.

See
Scene.Scene
eventProcessorUpdated
exit((RTNeuron)arg1) → None :

Stops all rendering and cleans up all the resource from the current Equalizer configuration.

The exited signal will be emitted when the config is considered to be done. Rendering is not stopped yet at that point.

Do not call from outside the main thread.

exitConfig((RTNeuron)arg1) → None :

Deprecated, use exit instead.

exited

Emitted while the config is done in Config.setDone.

Frame rendering is not finished at that point. The signal indicates that the lifetime of objects that might be referenced outside of the library (e.g. in Python) is about to end, so object destructions can be scheduled accordingly.

frame((RTNeuron)arg1) → None :

Trigger a frame.

If the rendering is paused, triggers the rendering of exactly one frame.

If the rendering loop is running, triggers a redraw request if the rendering loop was waiting for this event.

Do not call from outside the main thread in the application node.

frameIssued

Emitted after the internal rendering loop has finished issuing a frame.

The frame is not necessarily finished at this point, but all the distributed objects are guaranteed to have been committed. This can be used for animations.

getActiveViewEventProcessor((RTNeuron)arg1) → object
idle

Emitted when the application is idle, e.g. waiting for (user) events.

init((RTNeuron)arg1[, (str)configFile='']) → None :

Creates the view (windows, rendering threads, event processing, ...). Throws if there’s a view already created.

In parallel rendering configurations this function needs to call eq.client.initLocal to launch the rendering clients. Instead of blocking forever inside this function, a new thread will be created for the client loop.

This function blocks until the application (or rendering client) loop is guaranteed to have been started. The rendering loop starts paused; ::RTNeuron.resume must be called to start the rendering.

Do not call from outside the main thread.

Parameters

config - Path to an Equalizer configuration file or hwsd session name.
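
A typical startup sequence following the constraints stated above (a minimal sketch; note that createScene is called before init, as required):

import rtneuron
engine = rtneuron.RTNeuron([])  # Empty command line argument list.
scene = engine.createScene()    # Scenes must exist before init is called.
engine.init()                   # Creates windows and rendering threads.
view = engine.views[0]
view.scene = scene
engine.resume()                 # The rendering loop starts paused.
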
pause((RTNeuron)arg1) → None :

Block rendering loop after the current frame finishes.

Do not call from outside the main thread in the application node.

player

Returns the interface object to simulation playback.

There is a single player by default. It may become an external object shared by different views.

record((RTNeuron)arg1, (RecordingParams)arg2) → None :

High level function to dump the rendered frames to files.

When called, the rendering loop is resumed if paused.

If a camera path is given, a new camera path manipulator is created and assigned to all active views. Any previous camera manipulator will be overridden.

Recording is stopped automatically when:

  • The camera path end is reached.
  • The end of the simulation window is reached.
  • The maximum number of frames to render is reached.

whichever occurs first.

This function does not wait until all frames are finished (you can set a callback to frameIssued to count frames or use waitRecord). If the simulation delta is set to 0, no simulation playback will be performed (in that case, the current simulation window remains unmodified).

If the simulation window is invalid (start >= stop) it will not be considered to stop the recording.

Idle anti-aliasing appears disabled when using this function.

Changing simulation playback parameters in the player API will interfere with the results of this function.

Do not call from outside the main thread.

resume((RTNeuron)arg1) → None :

Resume the rendering loop.

Do not call from outside the main thread in the application node.

setShareContext((RTNeuron)arg1, (object)arg2) → None
textureUpdated

Emitted after the texture that captures the appNode rendering was updated.

This signal is tied to the existence of the GUI widget which takes this notification to render the UI and the texture.

useLayout((RTNeuron)arg1, (str)arg2) → None :

Change the Equalizer layout.

Throws if the layout does not exist or if there is no configuration initialized. Do not call outside the main thread.

versionString = 'RTNeuron 3.0.0 (c) 2006-2016 Universidad Politécnica de Madrid, Blue Brain Project'
views

Returns the vector of active views.

wait((RTNeuron)arg1) → None :

Wait for the Equalizer application loop to exit.

This function returns immediately if no config is active. Otherwise it blocks until some event makes the application loop exit.

While a thread is blocked in this function, calls to init or waitRecord will block.

waitFrame((RTNeuron)arg1) → None :

Wait for a new frame to be finished.

Returns immediately if no config is active. Exiting the config will also unlock the caller. Do not call from outside the main thread in the application node.

waitFrames((RTNeuron)arg1, (int)arg2) → None :

Wait for at least n frames to be finished.

This function resumes the rendering loop. Returns immediately if no config is active. Exiting the config will also unlock the caller. More frames may be generated before the function returns. Do not call from outside the main thread in the application node.

waitRecord((RTNeuron)arg1) → None :

Wait for the last frame of a movie recording to be issued.

Do not call outside the main thread.

class rtneuron._rtneuron.RecordingParams

Parameters to configure the generation of movies by ::RTNeuron.record

__init__((object)arg1) → None
cameraPath

The camera path to be used during rendering. If not assigned, each view will keep its own camera manipulator.

cameraPathDelta

Delta time in milliseconds by which the camera path is advanced each frame. If 0, real time will be used.

fileFormat

Extension (without dot) of the file format to use. File formats supported are those for which an OSG plugin is available.

filePrefix

Prefix to prepend to the output file names.

frameCount

If different from 0, sets the number of frames to render before recording stops.

simulationDelta

Delta time in milliseconds by which the simulation is advanced each frame.

simulationEnd

End timestamp in milliseconds.

simulationStart

Start timestamp in milliseconds.

stopAtCameraPathEnd

Set to true to stop recording at the end of the camera path if cameraPathDelta is a positive number. The camera path time interval is considered open at the right.
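
A minimal sketch of a recording session, assuming an initialized RTNeuron instance named engine and a CameraPath named path:

from rtneuron import RecordingParams
params = RecordingParams()
params.cameraPath = path
params.cameraPathDelta = 40  # 40 ms per frame, i.e. 25 frames per second.
params.filePrefix = 'frame_'
params.fileFormat = 'png'
engine.record(params)
engine.waitRecord()          # Block until the last frame is issued.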

class rtneuron._rtneuron.RepresentationMode

Representation mode for neurons

Values:
  • SOMA
  • SEGMENT_SKELETON
  • WHOLE_NEURON
  • NO_AXON
  • NO_DISPLAY
  • NUM_REPRESENTATION_MODES
NO_AXON = rtneuron._rtneuron.RepresentationMode.NO_AXON
NO_DISPLAY = rtneuron._rtneuron.RepresentationMode.NO_DISPLAY
SEGMENT_SKELETON = rtneuron._rtneuron.RepresentationMode.SEGMENT_SKELETON
SOMA = rtneuron._rtneuron.RepresentationMode.SOMA
WHOLE_NEURON = rtneuron._rtneuron.RepresentationMode.WHOLE_NEURON
class rtneuron._rtneuron.Scene

The scene to be rendered by one or more views.

A scene contains the circuit elements to be displayed as well as additional mesh models, and is associated with simulation data (this may be moved to the View class in the future).

The attributes that can be passed to ::RTNeuron.createScene are the following:

accurate_headlight (bool): Apply shading assuming directional light rays from the camera position or parallel to the projection plane.

alpha_blending (AttributeMap): If provided, transparent rendering will be enabled in this scene. The attributes to configure the alpha-blending algorithm are:

mode (string): depth_peeling, multilayer_depth_peeling or fragment_linked_list if compiled with OpenGL 3 support.

max_passes (string): Maximum number of rendering passes for multipass algorithms.

cutoff_samples (string): In multipass algorithms, the number of samples returned by the occlusion query at which the frame can be considered finished.

slices (int) [only for multi-layer depth peeling]: Number of slices to use in the per-pixel depth partition of the scene.

If the input attribute map is empty, transparent rendering will be disabled.

circuit (string): URI of the circuit to use for this scene.

mesh_path (string): Path where neuron meshes are located for the given circuit. An attempt is made to infer this path for circuits described by a Circuit/BlueConfig.

connect_first_order_branches (bool): Translate the start point of first order branches to connect them to the soma (detailed or spherical depending on the case).

em_shading (bool): Choose between regular phong or fake electron microscopy shading.

inflatable_neurons (bool): If true, the neuron models can be inflated by displacing the membrane surface in the normal direction. The inflation factor is specified as a view attribute called inflation_factor.

load_morphologies (bool): Whether to load morphologies for calculating soma radii or not.

lod (AttributeMap): Level of detail options for different types of objects
neurons (AttributeMap): Each attribute is a pair of floats in [0, 1] indicating the relative range in which a particular level of detail is used. Attribute names refer to levels of detail and can be: mesh, high_detail_cylinders, low_detail_cylinders, tubelets, detailed_soma, spherical_soma.

mesh_based_partition (bool): Use the meshes for load balancing spatial partitions. Otherwise only the morphologies are used. This option requires use_meshes to also be true.

partitioning (DataBasePartitioning): The type of decomposition to use for DB (sort-last) partitions.

preload_skeletons (bool): Preload all the capsule skeletons used for view frustum culling into the GPU instead of doing it the first time they are visible.

unique_morphologies (bool) If true, enables optimizations in spatial partitions that are only possible assuming that morphologies are unique.

use_cuda (bool): Enable CUDA-based view frustum culling.

use_meshes (bool): Whether triangular meshes should be used for neurons or not.

Scenes must be created using ::RTNeuron.createScene before the Equalizer configuration is started. At creation time scenes are assigned an internal ID. In multi-process configurations (whether on the same machine or not), the scenes to be used must be created in the same order to ensure the consistency of the frames.

At this moment, scene changes are not propagated from the application process to the rendering clients.
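
For example, transparent rendering can be requested at scene creation time. A minimal sketch, assuming an RTNeuron instance named engine:

from rtneuron import AttributeMap
attributes = AttributeMap()
attributes.alpha_blending = AttributeMap()
attributes.alpha_blending.mode = 'multilayer_depth_peeling'
attributes.alpha_blending.slices = 4
scene = engine.createScene(attributes)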

class Object
__init__()

Raises an exception. This class cannot be instantiated from Python.

apply((Object)arg1, (ObjectOperation)operation) → None :

Applies an operation to this scene object.

The operation is applied immediately, there is no need to call ::update(). The operation is distributed to all the nodes participating in the Equalizer configuration.

Exceptions

std.runtime_error - if the operation does not accept this object as input.
attributes
object

Return a copy of the object passed to the add method that returned this handler.

These objects will be:

  • A read-only numpy array of u4 for neurons, with the neuron GIDs.
  • A brain.Synapses container for add{A,E}fferentSynapses.
  • A string with the model name for addModel.
  • None for addGeometry.

query((Object)arg1, (object)ids[, (bool)check_ids=False]) → Object :

Return a handler to a subset of the entities from this object.

The function shall throw if any of the ids does not identify any entity handled by this object.

Attribute changes on the subset handler will affect only the entities selected. Attribute changes on the parent handler will still affect all the entities. The child handler attributes are also updated when the parent attributes are modified. However, attribute updates on a child handler are not propagated to the attributes of any other children (regardless of having overlapping subsets). Nevertheless, when calling ::update() changes in the attributes are always made effective when needed.

The lifetime of the returned object is independent of the source one, but the returned object will be invalidated (operations will throw) when the source one is deallocated or invalidated. If this function is called recursively, the new returned objects will depend on the parent of the called object.

Subset handlers are not part of the objects returned by ::Scene.getObjects().

Beware that attribute updates on subhandlers may be discarded if the parent object has not been fully integrated in the scene yet (i.e., no frame including it has been rendered).

This method may not be implemented by all objects.
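
As an illustration, a subset of the neurons in a handler can be given a different color. A minimal sketch, assuming a handler returned by Scene.addNeurons (the GIDs are arbitrary):

subset = handler.query([1, 2, 3])       # Handler to the cells with GIDs 1-3.
subset.attributes.color = [1, 0, 0, 1]  # Only the subset becomes red.
subset.update()                         # Make the change effective.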

update((Object)arg1) → None :

Issues an update operation on the scene object managed by this handler.

Attributes are copied internally so it’s safe to modify the attributes after an update, however those changes won’t take effect until update is called again.

class Scene.ObjectOperation
__init__()

Raises an exception. This class cannot be instantiated from Python.

Scene.__init__()

Raises an exception. This class cannot be instantiated from Python.

Scene.addAfferentSynapses((Scene)arg1, (Synapses)synapses[, (AttributeMap)attributes=rtneuron._rtneuron.AttributeMap()]) → Object :

Adds a set of synapse glyphs at their post-synaptic locations to the scene.

Thread safe with regard to the rendering loop.

Return
An object handler to the synapse set added.

Parameters

synapses - The synapse container.

attributes - Synapse display attributes:

radius (float)

color (floatx4)

surface (bool): If true, the synapses are placed on the surfaces of the geometry, or in the center otherwise

Scene.addEfferentSynapses((Scene)arg1, (Synapses)synapses[, (AttributeMap)attributes=rtneuron._rtneuron.AttributeMap()]) → Object :

Adds a set of synapse glyphs at their pre-synaptic locations to the scene.

Exactly the same as the function addAfferentSynapses but for efferent synapses.

Return
An object handler to the synapse set added.
Scene.addGeometry((Scene)arg1, (object)vertices[, (object)primitive=None[, (object)colors=None[, (object)normals=None[, (AttributeMap)attributes=rtneuron._rtneuron.AttributeMap()]]]]) → Object :

Adds geometry described as vertices and faces to the scene.

Return
An object handler to the model added.

Parameters

vertices - An Nx4 numpy array of floats or a list of 4-element lists for adding points with size/radii to the scene. An Nx3 array or a list of 3-element lists for all other cases (points with a single radius, lines or triangles).

primitive - If None, the vertices will be added as points to the scene. For adding lines or triangles this parameter must be an MxI numpy array of integers or a list of I-element lists, where I is 2 for lines and 3 for triangles.

colors - An Nx4 optional numpy array or N lists of 4-element iterables for per vertex colors, or a single 4-element iterable for a global color. If not provided a default color will be used.

normals - An optional Nx3 numpy array or a list of 3-element iterables with per vertex normals. Not used for points and spheres.

attributes - Optional attributes concerning shading details

flat (bool): If true, the normal array is ignored and flat shading is used instead. If false and no normal array is provided, vertex normals are computed on the fly. Flat shading is only meaningful for triangle meshes.

line_width (float): Line width. Only for line primitives.

point_size (float): For points without individual size/radius this is the overall size. Its interpretation depends on the point style. For spheres, it’s the radius. For points or circles, this is the screen size in pixels. If not specified, it will default to 1.

point_style (string): Use “spheres” to add real 3D spheres to the scene, “points” to add round point sprites and “circles” to add circles with 1 pixel of line width (the last two use regular GL_POINTS style). The default style if not specified is points.
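
A short sketch of adding spheres with per-point radii, assuming an existing Scene object named scene and that arguments can be passed by keyword as named in the signature:

import numpy
from rtneuron import AttributeMap
vertices = numpy.array([[0, 0, 0, 10],   # x, y, z and radius.
                        [50, 0, 0, 20]], dtype='f4')
attributes = AttributeMap()
attributes.point_style = 'spheres'
handler = scene.addGeometry(vertices, attributes=attributes)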

Scene.addModel((Scene)arg1, (str)model, (object)transformation[, (AttributeMap)attributes=rtneuron._rtneuron.AttributeMap()]) → Object :

Loads a 3D model from a file and adds it to the scene.

Return
An object handler to the model added.
Warning
Additional models are not divided up in DB decompositions.

Parameters

filename - Model file to load.

transform - An affine transformation to apply to the model.

attributes - Model attributes:

color (floatx4): The diffuse color to be applied to all parts of the model that don’t already specify a material.

flat (bool): If true and the model doesn’t include its own shaders, a shader to render facets with flat shading will be applied. If false, it has no effect.

addModel( (Scene)arg1, (str)model [, (str)transformation=’’ [, (AttributeMap)attributes=rtneuron._rtneuron.AttributeMap()]]) -> Object :

Convenience overload of the function above.

Return
An object handler to the model added.
Warning
Additional models are not divided up in DB decompositions

Parameters

filename - Model file to load.

transform - A sequence of affine transformations. The sequence is specified as a colon separated string of 3 possible transformations:

rotations “r@x,y,z,angle”

scalings “s@x,y,z”

translations “t@x,y,z”

attributes - See function above.
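
For instance, a mesh file can be added with a transformation string (a minimal sketch; the file name is hypothetical):

# A colon separated sequence: a scaling and a translation.
handler = scene.addModel('column.obj', 's@2,2,2:t@0,0,100')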

Scene.addNeurons((Scene)arg1, (object)neurons[, (AttributeMap)attributes=rtneuron._rtneuron.AttributeMap()]) → Object :

Add a set of neurons to the scene.

This is an asynchronous operation. The neuron container as well as the attribute map are copied internally so it is safe to modify them afterwards. Thread safe with regard to the rendering loop.

Return
An object handler to the neuron set added.

Parameters

gids - The GIDs of the neurons to add.

attributes - Neuron display attributes:

mode (RepresentationMode): How to display neurons. Neurons added with SOMA or NO_AXON modes cannot be switched to WHOLE_NEURON later on.

color_scheme (ColorScheme): Coloring method to use, SOLID_COLOR by default if not provided.

color (floatx4): RGBA tuple to be used as base color for SOLID_COLOR and BY_WIDTH_COLORS color schemes.

colormaps (AttributeMap): Optional submap with target specific color maps. These color maps override the color maps from the view. The supported color maps are:

by_distance_to_soma: The color map to use for the BY_DISTANCE_TO_SOMA coloring scheme.

by_width: The color map to use for the BY_WIDTH coloring scheme.

compartments: The color map to use for compartmental simulation data.

spikes: The color map to use for spike rendering. The range of this color map must be always [0, 1], otherwise the rendering results are undefined.

primary_color (floatx4): An alias of the above.

secondary_color (floatx4): RGBA tuple to be used as secondary color for BY_WIDTH_COLORS.

max_visible_branch_order (int): Changes the maximum branching order of visible sections. Use -1 to make all branches visible and 0 to make only the soma visible.
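
A short sketch of adding a small set of neurons, assuming an existing Scene object named scene (the GIDs are arbitrary):

from rtneuron import AttributeMap, ColorScheme, RepresentationMode
attributes = AttributeMap()
attributes.mode = RepresentationMode.NO_AXON
attributes.color_scheme = ColorScheme.BY_WIDTH
attributes.primary_color = [0.0, 0.5, 1.0, 1.0]
attributes.secondary_color = [1.0, 0.0, 0.0, 1.0]
handler = scene.addNeurons([395, 396, 397], attributes)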

Scene.attributes

The runtime configurable attribute map.

The modifiable attributes are:

alpha_blending (AttributeMap): The attribute map with options for transparency algorithms. See the Scene class documentation.

em_shading (bool): See Scene class documentation

auto_update (bool): Whether scene modifications automatically trigger a dirty signal or not.

inflatable_neurons (bool): Enable/disable neuron membrane inflation along the surface normal. The inflation factor is specified as a view attribute called inflation_factor.

Scene.cellSelected

Signal emitted when a cell is selected by pick.

Scene.cellSetSelected

Signal emitted when a group of cells is selected by pick.

Scene.circuit

Get: The current brain.Circuit used for this scene.

Set: Set the brain.Circuit to be used for this scene. Throws if the scene already contains neurons or synapses.

Scene.circuitBoundingSphere
Return
A tuple with the scene center and radius.
Scene.clear((Scene)arg1) → None :

Removes all the objects from the scene.

Clipping planes are removed.

To be called only from the application node.

Scene.clearClipPlanes((Scene)arg1) → None
Scene.clearSimulation((Scene)arg1) → None :

Clear any simulation report from the scene.

Scene.getClipPlane((Scene)arg1, (int)index) → list :

Queries the clip plane with a given index.

Exceptions

runtime_error - if no plane has been assigned in that index.
Scene.highlight((Scene)arg1, (object)arg2, (bool)arg3) → None :

Toggles highlighting of a cell set.

To be called only from the application node.

Parameters

gids - The set of cells to toggle.

on - True to highlight the cells, false otherwise.

Scene.highlightedNeurons

Return the gids of the highlighted neurons.

Return
A numpy array of u4 copied from the internal list.
Scene.neuronSelectionMask

Gets the set of unselectable cells.

This mask affects the results of ::pick() functions.

Return
A numpy array of u4 with the unselectable neurons.
Scene.objects

Returns the handlers to all objects added to the scene.

Return
A list of object handlers.
Scene.pick((Scene)arg1, (object)origin, (object)direction) → None :

Intersection test between the pointer ray and the scene elements.

May emit cellSelected or synapseSelected signals if a scene object was hit.

A signal is used to communicate the result to allow decoupling the GUI event handling code from selection action callbacks.

Parameters

origin - world space origin of the pick ray

direction - pick ray direction, does not need to be normalized

pick( (Scene)arg1, (View)view, (float)left, (float)right, (float)bottom, (float)top) -> None :

Intersection test of the space region selected by a rectangular area projected using the camera from the given view.

The implementation distinguishes between perspective and orthographic projections. It will emit a cellSetSelected signal with the group of somas intersected by the projection of the rectangle (both towards infinity and towards the camera position).

A signal is used to communicate the result to allow decoupling the GUI event handling code from selection action callbacks.

Parameters

view -

left - Normalized position (in [0,1]) of the left side of the rectangle relative to the camera projection frustum/prism.

right - Normalized position (in [0,1]) of the right side of the rectangle relative to the camera projection frustum/prism.

bottom - Normalized position (in [0,1]) of the bottom side of the rectangle relative to the camera projection frustum/prism.

top - Normalized position (in [0,1]) of the top side of the rectangle relative to the camera projection frustum/prism.
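
As an illustration, a selection callback can be connected before issuing a pick. A minimal sketch, assuming the signal exposes a connect method as other wrapped signals do (the callback signature is not detailed here):

def on_cell_selected(*args):  # Arguments as provided by the signal.
    print('cell selected:', args)
scene.cellSelected.connect(on_cell_selected)
scene.pick([0, 0, 0], [0, 0, -1])  # Ray from the origin towards -z.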

Scene.progress

Emitted as scene loading/creation advances.

Scene.remove((Scene)arg1, (Object)arg2) → None :

Removes a target/model from the scene given its handler.

Scene.setClipPlane((Scene)arg1, (int)index, (object)plane) → None :

Adds or modifies a clipping plane.

Clipping planes are only applied to subscenes that have no spatial decomposition, otherwise they are silently ignored.

Parameters

index - Number of plane to be set. The maximum number of clipping planes is 8.

plane - Plane equation of the clipping plane.

Exceptions

runtime_error - if index is >= 8.
Scene.setSimulation((Scene)arg1, (CompartmentReport)arg2) → None :
Thread safe with regard to the rendering loop.

setSimulation( (Scene)arg1, (SpikeReportReader)arg2) -> None :

Thread safe with regard to the rendering loop.
Scene.somasBoundingSphere
Return
The center and radius around the somas of the scene.
Scene.synapseSelected

Signal emitted when a synapse is selected by pick. Do not store the synapse argument passed to the callback.

Scene.synapsesBoundingSphere
Return
The center and radius around the synapses of the scene.
Scene.update((Scene)arg1) → None :

To use when auto_update is false to trigger the scene update.

If the auto_update attribute is false, adding/removing objects from the scene or changing attributes that modify the rendering style will not trigger a new frame and consequent scene update. This function can be used to trigger it manually.

class rtneuron._rtneuron.SimulationPlayer

Interface to simulation playback control.

The simulation timestamp is part of the frame data; this implies that all views are rendered with the same timestamp.

FINISHED = rtneuron._rtneuron.PlaybackState.FINISHED
PAUSED = rtneuron._rtneuron.PlaybackState.PAUSED
PLAYING = rtneuron._rtneuron.PlaybackState.PLAYING
__init__()

Raises an exception. This class cannot be instantiated from Python.

adjustWindow((SimulationPlayer)arg1) → None :

Adjusts the simulation playback window to the reports of the active scenes.

The begin time will be equal to the minimum of the start times of all reports and the end time will be equal to the maximum of the end times of all reports.

For stream based reports, this function will try to update the end timestamp if the playback state is paused.

The timestamp will be clamped to the new window and a new frame will be triggered if necessary.

Exceptions

runtime_error - if there’s no active scene with a report attached.
beginTime
endTime
finished
pause((SimulationPlayer)arg1) → None :

Pause simulation playback.

play((SimulationPlayer)arg1) → None :

Start simulation playback from the current timestamp.

playbackStateChanged

Signal emitted when simulation playback state is changed.

simulationDelta

The timestep between simulation frames to be used at playback.

simulationDeltaChanged

Signal emitted when simulation delta is changed.

timestamp

Get: The timestamp being displayed currently or NaN if undefined.

Set: Sets the next timestamp to display and triggers rendering.

It may throw when trying to move the timestamp beyond the end of a stream-based report.

timestampChanged

Signal emitted whenever a new frame with a new timestamp has finished.

window

A (double, double) tuple with the simulation playback time window.

If written, the timestamp to display is clamped to the new window, a new frame is triggered if necessary and simulation window auto-adjustment is turned off (to turn it on again set RTNeuron.attributes.auto_adjust_simulation_window to True).
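
A minimal sketch of driving playback, assuming an RTNeuron instance named engine whose scene has a simulation report attached:

player = engine.player
player.window = (0.0, 1000.0)  # Playback window in milliseconds.
player.simulationDelta = 0.5   # Timestep between simulation frames.
player.play()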

windowChanged

Signal emitted when simulation window is changed.

The signal is emitted by either calls to setWindow or by simulation window auto-adjustment (see ::RTNeuron.RTNeuron() for details).

class rtneuron._rtneuron.TrackballManipulator

Default mouse-based manipulator from OpenSceneGraph.

Use the right button to zoom, middle button to pan and left button to rotate.

Inherits from bbp.rtneuron.CameraManipulator

__init__((object)arg1) → None
getHomePosition((TrackballManipulator)arg1) → tuple :
Return
A tuple (eye, center, up) where each one is an [x, y, z] vector
Version
2.3
setHomePosition((TrackballManipulator)arg1, (object)eye, (object)center, (object)up) → None :
Version
2.3

Parameters

eye - The reference camera position

center - The reference pivot point for rotations

up - The direction of the up direction in the reference orientation. The “look at” vector is (center - eye).

class rtneuron._rtneuron.View

This class represents a view on a scene.

A view holds together a scene, a camera and the visual attributes that are not bound to a scene, e.g. level of detail bias, simulation color map, stereo correction. A view can also have a camera manipulator and a selection pointer. Cameras are view specific and cannot be shared.

There is a one to one mapping between RTNeuron views and Equalizer views.

Note
Currently the simulation report is bound to the scene but this will be moved to the view in the future. The same applies to enabling/disabling alpha blending at runtime.
__init__()

Raises an exception. This class cannot be instantiated from Python.

attributes

Attribute map with runtime configurable attributes for a View.

Existing attributes are:

General:

background (floatx4): Background color. The alpha channel of the background is considered by frame grabbing functions. If alpha equals 1, the output images will have no alpha channel.

use_roi (float): Compute and use regions of interest for frame readback in parallel rendering configurations.
Appearance:

clod_threshold (float): When using continuous LOD, the unbiased distance at which the transition from pseudocylinders to tubelets occurs for branches of radius 1. This value is modulated by the lod_bias. During rendering, the distance of a segment is divided by its radius before comparing it to the clod_threshold.

colormaps (AttributeMap): A map of ColorMap objects. The currently supported color maps are:

compartments: The color map to use for compartmental simulation data.

spikes: The color map to use for spike rendering. The range of this color map must always be [0, 1], otherwise the rendering results are undefined.

display_simulation (bool): Show/hide simulation data.

idle_AA_steps (int): Number of frames to accumulate in idle anti-aliasing

highlight_color (floatx4): The color applied to make highlighted neurons stand out. The highlight color replaces the base color when display_simulation is disabled. When display_simulation is enabled, the highlight color is added to the color obtained from simulation data mapping.

inflation_factor (float): Sets the offset in microns by which neuron membrane surfaces will be displaced along their normal direction. This parameter has effect only on those scenes whose inflatable_neurons attribute is set to true.

lod_bias (float): A number between 0 and 1 that specifies the bias in LOD selection. 0 goes for the lowest LOD and 1 for the highest.

probe_color (floatx4): The color to apply to those parts of a neuron whose simulation value is above the threshold if simulation display is enabled.

probe_threshold (float): The simulation value above which the probe color will be applied to neuron surfaces if simulation display is enabled.

spike_tail (float): Time in milliseconds during which the visual representation of spikes remains visible.

Frame capture

snapshot_at_idle (bool): If true, take snapshots only when the rendering thread becomes idle (e.g. antialias accumulation done). Otherwise, the snapshot is taken at the very next frame.

output_file_prefix (string): Prefix for file written during recording.

output_file_format (string): File format extension (without dot) to use during frame recording. Supported extensions are those for which OSG can find a plugin.

Cameras and stereo

auto_compute_home_position (bool): If true, the camera manipulator home position is recomputed automatically when the scene object is changed or when the scene emits its dirty signal.

auto_adjust_model_scale (bool): If true, every time the scene is changed the ratio between world and model scales is adjusted.

depth_of_field (AttributeMap): Attributes to enable and configure depth of field effect.

enabled (bool)

focal_distance (float): Distance to the camera in world units at which objects are in focus

focal_range (float): Distance from the focal point within which objects remain in focus.

model_scale (bool) : Size hint used by Equalizer to setup orthographic projections and stereo projections. Set to 1 in order to use world coordinates in orthographic camera frustums.

stereo_correction (float): Multiplier of the scene size in relation to the observer for stereo adjustment.

stereo (bool) : Enables/disables stereoscopic rendering.

zero_parallax_distance (float): In stereo rendering, the distance from the camera in meters at which left and right eye projections converge into the same image (only meaningful for fixed position screens).

All valid attributes are initialized to their default values.

camera

Get only:

cameraManipulator

Get: Set: Sets the manipulator that controls the camera.

The camera manipulator will receive input events from this view and process them into a model matrix. At construction, a trackball manipulator is created by default.

computeHomePosition((View)arg1) → None :

Compute the home position for the current scene and set it to the camera manipulator.

The camera position is also reset to the new home position.

pointer

Get: Set: The selection pointer used for this view.

record((View)arg1, (bool)enable) → None :

Enable or disable frame grabbing.

See
File naming attributes from View.attributes

Parameters

enable - If true, rendered images will be written to files starting from next frame on.
scene

Get: Set: Sets the scene to be displayed.

snapshot((View)arg1, (str)fileName[, (bool)waitForCompletion=True]) → None :

Triggers a frame and writes the rendered image to a file.

This method waits until the image has been written unless waitForCompletion is false, in which case it returns immediately.

When idle AA is enabled and the snapshot_at_idle attribute is set, the snapshot is taken when frame accumulation is finished.

Throws if fileName is empty.

Parameters

fileName - Filename including extension. If the filename includes the sequence “%c”, all destination channels will be captured, replacing “%c” with the channel name in the output file. Notice that this option is meaningless for the offscreen snapshot functions.

waitForCompletion - if true, locks until the image has been written to a file.

snapshot( (View)arg1, (str)fileName, (float)scale) -> None :

Triggers a frame on an auxiliary off-screen window and writes the rendered image to a file.

The off-screen window can have a different size than the windows in which this view resides. The vertical field of view of the camera will be preserved.

This method waits until the image has been written.

When idle AA is enabled and the snapshot_at_idle attribute is set, the snapshot is taken when frame accumulation is finished.

Throws if fileName is empty or scale is negative or zero.

Parameters

fileName - Filename including extension.

scale - Scale factor that will be uniformly applied to the original view to obtain the final image.

snapshot( (View)arg1, (str)fileName, (tuple)resolution) -> None :

Triggers a frame on an auxiliary off-screen window and writes the rendered image to a file.

The off-screen window can have a different size than the windows in which this view resides. The vertical field of view of the camera will be preserved.

This method waits until the image has been written.

When idle AA is enabled and the snapshot_at_idle attribute is set, the snapshot is taken when frame accumulation is finished.

Throws if fileName is empty or if any of the resolution components is 0.

Parameters

fileName - Filename including extension.

resolution - Tuple containing the horizontal and vertical resolution that will be used to generate the final image.
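
A short sketch of the three snapshot overloads, assuming an existing View object named view:

view.snapshot('image.png')                   # At the current resolution.
view.snapshot('image_2x.png', 2.0)           # Scaled by a factor of 2.
view.snapshot('image_hd.png', (1920, 1080))  # At an explicit resolution.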

rtneuron.net namespace

rtneuron.sceneops namespace

class rtneuron._rtneuron._sceneops.NeuronClipping

This class provides a branch level clipping operation for neurons.

Culling must be enabled in the scene that contains the target object. Otherwise this operation will have no effect.

The clipping state to apply is specified by a set of functions to make visible/invisible ranges of the morphological sections.

The culling mechanism discretizes sections in a predefined number of portions per section. Although the API allows a finer clipping description, all operations will work at the resolution defined by the implementation.

The current resolution is at most 32 portions per section (regardless of the section length).

Neuron clipping is affected by the representation mode in the following ways:

  • When all representation modes are available and the mode is changed, the clipping masks are cleared before applying the masks required by the new mode.
  • If the neuron was created with NO_AXON or SOMA modes, changing the representation mode does not affect the current clipping.
  • Clipping does not have any effect on the SOMA representation mode under any circumstances.
  • When the representation mode is NO_AXON, axon sections cannot be unclipped.

Neuron clipping respects spatial partitions of DB configurations.

Version
2.7

Inherits from bbp.rtneuron.Scene.ObjectOperation, boost.enable_shared_from_this< NeuronClipping >

__init__((object)arg1) → None
clip((NeuronClipping)arg1, (object)sections, (object)starts, (object)ends) → NeuronClipping :

Mark section ranges for making them invisible.

Discrete section portions are clipped only if the given range fully contains them. The ranges are considered closed intervals. Ranges applied to section 0 (assumed to be the soma) are always converted into [0, 1].

Subsequent calls to NeuronClipping.unclip will cut/split/remove the ranges to be applied.

Return
self for operation concatenation.

Parameters

sections - Section id list. Ids may be repeated.

starts - Relative start positions of the ranges. Each value must be smaller than the value at the same position of the ‘ends’ vector, otherwise the range is ignored.

ends - Relative end positions of the ranges. Each value must be greater than the value at the same position of the ‘starts’ vector, otherwise the range is ignored.

Exceptions

std.invalid_argument - if the arrays do not have the same size or if a range is ill-defined.
clipAll((NeuronClipping)arg1[, (bool)alsoSoma=False]) → NeuronClipping :

Make all neurites and optionally the soma invisible.

Return
self for operation concatenation.

Parameters

alsoSoma - If true, the soma will be clipped.
unclip((NeuronClipping)arg1, (object)sections, (object)starts, (object)ends) → NeuronClipping :

Mark section ranges for making them visible.

Discrete section portions are unclipped only if the given range fully contains them. The ranges are considered closed intervals. Ranges applied to section 0 (assumed to be the soma) are always converted into [0, 1].

Subsequent calls to NeuronClipping.clip will cut/split/remove the ranges to be applied.

Return
self for operation concatenation.

Parameters

sections - Section id list. Ids may be repeated.

starts - Relative start positions of the ranges. Each value must be smaller than the value at the same position of the ‘ends’ vector, otherwise the range is ignored.

ends - Relative end positions of the ranges. Each value must be greater than the value at the same position of the ‘starts’ vector, otherwise the range is ignored.

Exceptions

std.invalid_argument - if the arrays do not have the same size or if a range is ill-defined.
unclipAfferentBranches((NeuronClipping)arg1, (int)arg2, (Morphology)arg3, (Synapses)arg4) → NeuronClipping :

Apply the unclip masks that make visible the portions of the afferent branches (dendrites) that connect the soma of a neuron to the given synapses.

unclipAll((NeuronClipping)arg1) → NeuronClipping :

Make all neurites and the soma visible.

Return
self for operation concatenation.
unclipEfferentBranches((NeuronClipping)arg1, (int)arg2, (Morphology)arg3, (Synapses)arg4) → NeuronClipping :

Apply the unclip masks that make visible the portions of the efferent branches (axon) that connect the soma of a neuron to the given synapses.

Free functions

rtneuron.add_hexagonal_prism(scene, center, height, radius, color=[0.2, 0.4, 1.0, 0.2], line_width=2.5)

Add a hexagonal prism to a scene.

The prism is added as two objects, one for the faces and another for the outline. The outline is rendered with black lines using GL_LINES. The line width can be chosen, but it must be >= 1.
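Example (a sketch; center, height and radius are arbitrary values, and view.scene is assumed to expose the scene of an existing view):

import rtneuron
rtneuron.add_hexagonal_prism(view.scene, center=[0, 0, 0], height=500,
                             radius=250, color=[0.2, 0.4, 1.0, 0.2])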

rtneuron.apply_compartment_report(simulation, scene_or_view, report_name)

Load a compartment report and apply it to the given scene.

The second parameter can be a Scene or a View. If a View is given, simulation display will be enabled on it.
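Example (a sketch; 'voltages' is a hypothetical report name, rtneuron.simulation is assumed to have been set up by display_circuit, and view to be an existing View):

import rtneuron
rtneuron.apply_compartment_report(rtneuron.simulation, view, 'voltages')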

rtneuron.apply_spike_data(simulation_or_filename, scene_or_view)

Load a spike file and apply it to the given scene.

The second parameter can be a Scene or a View. If a View is given, simulation display will be enabled on it.
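Example (a sketch; 'out.dat' is a hypothetical spike file and view an existing View):

import rtneuron
rtneuron.apply_spike_data('out.dat', view)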

rtneuron.create_scene(engine, circuit, neuron_targets, simulation=None, report_name=None, spikes=None, scene_attributes=None)

Create a scene object for an engine, assign a circuit to it, add the given targets and optionally set up simulation data.
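Example (a sketch; the engine and circuit objects are assumed to have been created elsewhere, and the target list follows the same conventions as display_circuit below):

import rtneuron
scene = rtneuron.create_scene(
    engine, circuit,
    [('Column', {'mode': rtneuron.RepresentationMode.SOMA})])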

rtneuron.display_circuit(config=None, target=('Column', {'mode': rtneuron._rtneuron.RepresentationMode.SOMA}), report=None, spikes=None, eq_config='', argv=None, opengl_share_context=None)

Opens a simulation configuration and displays the given targets.

If no config is provided, this function will try to load the Kaust circuit from a known location; failing that, it will try to load the test data set.

The target specification can be rather complex so it deserves a detailed explanation:

  • A target can be a single target element or a list of target elements.
  • Each element can be a target key or a tuple of key and attributes.
  • Target keys can be of one of these types:
    • integer: Cell identifiers
    • numpy arrays of dtype u4, u8 or i4
    • string: Target labels. A target label can be in the form regex[%number], where regex is a valid Python regular expression and the optional suffix specifies a random subsampling of the target to a given percentage, e.g. Layer_[23]%10 will result in a 10% of targets Layer_2 and Layer_3.
    • an iterable object: Each element being a cell identifier.
  • The attributes for a target can be either AttributeMap objects or dictionaries. Possible attributes are documented in Scene.addNeurons

The following are examples of target specifications:

  • ‘Column’

  • (‘MiniColumn_0’, {‘mode’: RepresentationMode.SOMA})

  • [‘Layer1’, ‘Layer2’]

  • numpy.array([1, 2, 3, 4, 5], dtype='u4')

  • [(123, {‘color’: [1, 0, 0, 1]}), (range(1, 100), {‘color’: [0, 0.5, 1, 1]})]

A compartment report name can be provided. The spikes parameter can take a file name to read a spike report from a file or True to use the default spike report of the config.

The optional parameter opengl_share_context can be used to pass a QOpenGLContext to be assigned to the engine before init is called. This is used to integrate Qt overlays using the classes in the rtneuron.gui module.

This function affects two global variables of the rtneuron module:

  • engine, the RTNeuron engine. If one already exists, its current configuration is exited before anything else; otherwise a new engine is created.
  • simulation, the brain.Simulation opened.

Multi-node configurations are not supported by this method. Trying to do so has undefined behaviour (most probably a deadlock).
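Putting the pieces together, a minimal sketch (the config path and target labels are hypothetical):

import rtneuron
rtneuron.display_circuit(
    'BlueConfig',
    [('MiniColumn_0', {'mode': rtneuron.RepresentationMode.WHOLE_NEURON}),
     ('Layer4%10', {'color': [1, 0, 0, 1]})])  # a random 10% of Layer4 in red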

rtneuron.display_empty_scene(scene_attributes=<rtneuron._rtneuron.AttributeMap object>, argv=None, opengl_share_context=None)

Instantiate an RTNeuron engine with an empty scene.

Return the view object.

rtneuron.display_morphology_file(file_name, use_tubelets=True, soma=True, argv=None)

Display a morphology given its morphology file (swc or h5).

Parameters:

  • file_name (str): Path to the swc or h5 morphology file.
  • use_tubelets (bool): If true, render branches using tubelets, otherwise use pseudo-cylinders.
  • soma (bool): If true, add an approximation of the soma as a sphere to the model.

Temporary files with the circuit paths and description are created to be able to load the morphology and create a scene to be displayed. The morphology is rendered directly from the morphological data, so no mesh is required. View frustum culling is disabled.
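Example (a sketch; the path is hypothetical):

import rtneuron
rtneuron.display_morphology_file('/path/to/morphology.h5', use_tubelets=True,
                                 soma=True)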

rtneuron.display_shared_synapses(presynaptic, postsynaptic, afferent=True, attributes=None)

Add the afferent or efferent locations of the synapses at which a presynaptic target innervates a postsynaptic one.

This function assumes that an engine and simulation are already set up. Synapses are added to the scene of the first view of the current engine.

The neurons of both targets are assumed to be already loaded, as well as the morphologies of the presynaptic neurons for efferent locations and the morphologies of the postsynaptic neurons for afferent locations. For synapses that project onto the soma, the presynaptic morphologies are also needed to find the afferent positions.

If the neurons are missing an exception will be thrown.

If a morphology is not available to compute the location of a synapse, that synapse will be skipped and a warning message printed.

The pre and postsynaptic targets can be:

  • integer: Cell identifiers
  • an iterable of integers
  • a numpy array of u4, u8 or i4
  • string: A target label

The optional attributes parameter takes an AttributeMap as input.
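Example (a sketch; the GIDs are hypothetical and the 'radius' attribute is an assumption about what the synapse objects understand):

import rtneuron
attributes = rtneuron.AttributeMap()
attributes.radius = 3.0  # assumed attribute for the synapse glyph size
rtneuron.display_shared_synapses(123, 456, afferent=True,
                                 attributes=attributes)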

rtneuron.display_synapses(targets, afferent=True, attributes=None)

Adds the afferent or efferent synapses of a given cell target to the scene of the first view of the current application.

This function assumes that an application and simulation are already set up. Synapses are added to the scene of the first view of the current application.

The neurons of the given target (and the efferent neurons in the case of efferent synapses or somatic afferent synapses), as well as the morphologies needed to find the locations of the synapses, are assumed to be already loaded. If the neurons are missing, an exception will be thrown.

Synapses for which the morphology needed to compute the location is missing are skipped (a warning message will be printed in this case).

The target can be:

  • integer: Cell identifiers
  • an iterable of integers
  • a numpy array of u4, u8 or i4
  • string: A target label

The optional attributes parameter takes an AttributeMap as input.
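Example (a sketch; the target label is hypothetical):

import rtneuron
rtneuron.display_synapses('Layer1', afferent=True)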

rtneuron.snapshot_to_notebook(view)

Takes a snapshot of the given view and adds the image to the active IPython notebook.
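Example, inside an IPython notebook cell:

import rtneuron
view = rtneuron.display_empty_scene()
rtneuron.snapshot_to_notebook(view)  # embeds the rendered image in the notebook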

rtneuron.start_app(name='circuit_viewer', *args, **kwargs)

Start the app with the given name.

Apps are searched for as modules under rtneuron.apps. Positional and keyword arguments are forwarded to the app initialization function.

rtneuron.start_shell(local_ns=None, module=None)

Start an IPython shell.

The namespace of the IPython shell is the namespace of the rtneuron module unless another one is provided. A regular Python console is started if IPython is not available.

Helper modules

rtneuron.util

rtneuron.util.key_to_gids(key, resolver)

Convert a target key to a GID array.

A key can be:

  • An integer
  • A string with a target name or regex (with an optional %n suffix appended, where n is a number between 0 and 100)
  • A numpy array of dtype u4, u8 or i4
  • An iterable of integers

The resolver must be a brain.Simulation or brain.Circuit.
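Example (a sketch; the target labels are hypothetical and rtneuron.simulation is assumed to have been set up by display_circuit):

import rtneuron
from rtneuron import util
gids = util.key_to_gids('Layer[12]%50', rtneuron.simulation)  # 50% of Layer1/2
gids = util.key_to_gids(range(1, 100), rtneuron.simulation)   # explicit GIDs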

rtneuron.util.label_to_gids(label, resolver)

Convert a cell set label or regular expression to a gid set (numpy u4).

Parameters:

  • label (str): A target name or regular expression. Regular expressions are only accepted if resolver is a brain.Simulation. The string can have a “%n” suffix appended to denote that a random fraction of the gid set is requested, where n is a real number between 0 and 100.
  • resolver (brain.Simulation or brain.Circuit): The object that will translate target names into gid lists.
rtneuron.util.targets_to_gids(targets, resolver)

Return a numpy array with the gids of the given targets. Targets can be any object accepted by key_to_gids or an iterable of any of those. The resolver must be a brain.Simulation or brain.Circuit.
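Example (a sketch, under the same assumptions as the key_to_gids sketch above):

gids = util.targets_to_gids(['Layer1', 123, range(200, 300)],
                            rtneuron.simulation)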

rtneuron.util.camera.set_manipulator_home_position(view, target, **kwargs)

Set the home position of the camera manipulator of the given view to a front view of the input cell target.

The input target can be one of:

  • a single cell GID as an integer
  • a numpy array of u4, i4 or u8
  • a string with a target label (regular expressions included)
  • an iterable object, each element being a cell identifier

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
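Example (a sketch; the target label is hypothetical and view is an existing View):

from rtneuron.util.camera import set_manipulator_home_position
set_manipulator_home_position(view, 'MiniColumn_0', air_pixels=[0.2, 0.2])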
rtneuron.util.camera.soma_positions(gids, circuit)

Return the soma positions of the given neurons.

rtneuron.util.camera.Paths.apply_camera_path(path, view)

Apply a camera path to a view using a camera path manipulator.

rtneuron.util.camera.Paths.flythrough(simulation, targets, duration=10, **kwargs)

Return a camera path of a flythrough of a cell target.

This function will load the given simulation config file and all the neurons associated with the target specification. The camera position is computed based only on the soma positions and corresponds to a front view of the circuit. The path duration must be given in seconds.

The targets parameter can be:

  • A cell GID (as integer)
  • An iterable of GIDs
  • A target label (as a string)
  • A list of any of the above

The optional keyword arguments are:

  • samples: Number of keyframes to generate
  • speedup: From 1 to inf, this parameter specifies a speed-up of the initial camera speed. Use 1 for a linear camera path; if greater than one, the camera starts faster and decreases its speed non-linearly and monotonically. Recommended values are between 1 and 3. The default value is 1.

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
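Example, generating and applying a flythrough path (a sketch; the config path and target are hypothetical, and view is an existing View):

from rtneuron.util.camera.Paths import apply_camera_path, flythrough
path = flythrough('BlueConfig', 'Column', duration=15, samples=150)
apply_camera_path(path, view)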
rtneuron.util.camera.Paths.front_to_top_rotation(simulation, targets, duration=10, **kwargs)

Return a camera path of a rotation from front to top view of a set of circuit targets.

This function will load the given simulation config file and all the neurons associated with the target specification. The front and top camera positions are computed based on the soma positions. The path duration is in seconds.

The targets parameter can be:

  • A cell GID (as integer)
  • An iterable of GIDs
  • A target label (as a string)
  • A list of any of the above

The optional keyword arguments are:

  • timing: A [0..1]->[0..1] function used to map sample timestamps from a uniform distribution to any user given distribution.
  • samples: Number of keyframes to generate

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
rtneuron.util.camera.Paths.front_view(simulation, targets, **kwargs)

Return a camera path with a front view of a cell target.

This function will load the given blue config file and all the neurons associated with the target specification. The camera position is computed based only on the soma positions.

The target parameter can be:

  • A cell GID (as integer)
  • An iterable of GIDs
  • A target label (as a string)
  • A list of any of the above

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
rtneuron.util.camera.Paths.make_front_view(view, **kwargs)

Set up the camera position of the given view to look from the front at the neurons in the view’s scene.

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
rtneuron.util.camera.Paths.make_top_view(view, **kwargs)

Set up the camera position of the given view to look from the top at the neurons in the view’s scene.

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
rtneuron.util.camera.Paths.rotate_around(simulation, targets, duration=10, **kwargs)

Return a camera path of a front view rotation around a circuit target.

This function will load the given simulation config file and all the neurons associated with the target specification. The start position is computed based on the soma positions. The path duration is in seconds.

The target parameter can be:

  • A cell GID (as integer)
  • An iterable of GIDs
  • A target label (as a string)
  • A list of any of the above

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
rtneuron.util.camera.Paths.rotation(look_at, axis, start, angle, up=[0, 1, 0], duration=10, **kwargs)

Return a camera path of a rotation around an arbitrary axis with a fixation point.

The parameters are:

  • look_at: The point at which the camera will look. The rotation axis is also placed at this point.
  • axis: A normalized vector used as rotation axis. The rotation sense is defined applying the right-hand rule to this vector.
  • start: The initial camera position. The distance from this point to the look_at point is preserved.
  • angle: The rotation angle of the final position in radians.
  • up: A normalized vector or one of the strings “axis” or “tangent”. This parameter defines the vector to which the y axis of the camera is aligned. If a normalized vector is given, that direction is used. For “axis”, the axis direction is used. For “tangent”, the up direction lies on the rotation plane and is tangent to the circular trajectory.

The optional keyword arguments are:

  • samples: Number of keyframes to generate
  • timing: A [0..1]->[0..1] function used to map sample timestamps from a uniform distribution to any user given distribution.
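Example, a full turn around the vertical axis (a sketch; the points are arbitrary and view is an existing View):

import math
from rtneuron.util.camera.Paths import apply_camera_path, rotation
# Rotate around the y axis through the origin, starting 1000 units away
# on the z axis, keeping the camera's up vector tangent to the trajectory.
path = rotation([0, 0, 0], [0, 1, 0], [0, 0, 1000], 2 * math.pi,
                up='tangent', duration=20)
apply_camera_path(path, view)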
rtneuron.util.camera.Paths.top_view(simulation, targets, **kwargs)

Return a camera path with a top view of a cell target.

This function will load the given simulation config file and all the neurons associated with the target specification. The camera position is computed based only on the soma positions.

The target parameter can be:

  • A cell GID (as integer)
  • An iterable of GIDs
  • A target label (as a string)
  • A list of any of the above

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
rtneuron.util.camera.Ortho.front_ortho(simulation, targets, **kwargs)

Return the frustum parameters for an orthogonal front view of a cell target.

This function will load a simulation configuration file to get the soma positions of the neurons given their gids. The frustum size is computed based only on these positions. In order to frame the scene correctly an additional camera path has to be set up.

The target parameter can be:

  • A cell GID (as integer)
  • A numpy array of u4, u8 or i4
  • A target label (as a string)
  • A list of any of the above

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
rtneuron.util.camera.Ortho.make_front_ortho(view, **kwargs)

Set up the camera projection and position of the given view to do an orthographic front projection of the neurons in its scene.

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
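Example (a sketch; view is an existing View, and 5% of empty space is left on each dimension):

from rtneuron.util.camera.Ortho import make_front_ortho
make_front_ortho(view, air_pixels=[0.05, 0.05])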
rtneuron.util.camera.Ortho.make_top_ortho(view, **kwargs)

Set up the camera projection and position of the given view to do an orthographic top projection of the neurons in its scene.

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.
rtneuron.util.camera.Ortho.top_ortho(simulation, targets, **kwargs)

Return the frustum parameters for an orthogonal top view of a cell target.

This function will load the given blue config file and all the neurons associated with the target specification. The frustum size is computed based only on the soma positions. In order to frame the scene correctly an additional camera path has to be set up.

The target parameter can be:

  • A cell GID (as integer)
  • A numpy array of u4, u8 or i4
  • A target label (as a string)
  • A list of any of the above

The keyword arguments used to fit the viewpoint are:

  • air_pixels: a list of two floats with the desired fraction of empty horizontal and vertical space. This applies to the method used to frame the cell somas, i.e., branches are not considered. The default value is [0.1, 0.1].
  • fit_point_generator: the function used to determine the position of a neuron. Possible values:
    • rtneuron.util.camera.soma_positions (default, fastest)
    • rtneuron.util.camera.dendrite_endpoint_positions (more precise, slower)
  • point_radius: If provided, the points to be bound will be considered spheres of the given radius. If omitted and the fit point list has a single point, it will be considered as a sphere of radius 200.
  • aspect_ratio: ratio of width/height of the image. The default corresponds to the default frustum.
  • vertical_fov: vertical field of view angle in degrees of the camera to be used. The default corresponds to the default frustum.

rtneuron.sceneops

class rtneuron.sceneops.SynapticProjections.SynapticProjections(scene, presynaptic_color=[0.0, 0.5, 1.0, 1.0], postsynaptic_color=[1.0, 0.0, 0.0, 1.0], unselected_color=[0, 0, 0, 0.1], target_mode=rtneuron._rtneuron.RepresentationMode.WHOLE_NEURON, clip_branches=True)

This class provides functions to show synaptic projections in a given scene/microcircuit.

It provides an easy way to display retrograde and anterograde projections for cells selected in the scene and a method to display the synaptic pathways from a pre-synaptic target to a post-synaptic target.

A callback is hooked to the cellSelected signal from a scene to show:

  • No cell selected: all cells are displayed as somas
  • Retrograde projections: a post-synaptic cell and its pre-synaptic cells with the selected representation mode and colors
  • Anterograde projections: a pre-synaptic cell and its post-synaptic cells with the selected representation mode and colors

The representation modes for pre/post synaptic sets are:

  • Soma only
  • Whole detailed neurons
  • Detailed neuron with branch-level clipping to show only the portions of the branches that run along the path that connects the soma of the presynaptic cell to the soma of the postsynaptic cell through each synapse.

For synaptic projections between two sets, the detailed representation with branch level culling is always used.

__init__(scene, presynaptic_color=[0.0, 0.5, 1.0, 1.0], postsynaptic_color=[1.0, 0.0, 0.0, 1.0], unselected_color=[0, 0, 0, 0.1], target_mode=rtneuron._rtneuron.RepresentationMode.WHOLE_NEURON, clip_branches=True)

Store the circuit and scene information and hook a callback to the scene.cellSelected signal to switch between the different synaptic projection modes.

Parameters:

  • presynaptic_color: Color to use for presynaptic cells
  • postsynaptic_color: Color to use for postsynaptic cells
  • unselected_color: Color to use for cells which are not part of the anterograde or retrograde set of a selected cell.
  • target_mode: Representation mode to use for the anterograde/retrograde cell set
  • clip_branches: If target_mode is WHOLE_NEURON, apply fine-grained clipping to branches to highlight only the paths that connect the pre- and post-synaptic somas through each synapse.
set_postsynaptic_attributes(attributes)

Sets the given attributes on the handlers of connected, postsynaptic-only cells.

set_presynaptic_attributes(attributes)

Sets the given attributes on the handlers of connected, presynaptic-only cells.

show_anterograde_projections(gid, subset=None)

Find the postsynaptic cells of the given one and display them according to the current attributes for mode, color and clipping.

show_retrograde_projections(gid, subset=None)

Find the presynaptic cells of the given one and display them according to the current attributes for mode, color and clipping.
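A minimal usage sketch (the GID is hypothetical, and view.scene is assumed to expose the scene of the current view):

from rtneuron.sceneops.SynapticProjections import SynapticProjections
projections = SynapticProjections(view.scene)
projections.show_retrograde_projections(1000)  # select postsynaptic cell 1000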