API Reference

Acquisition event specification

The following shows all possible fields in an acquisition event (not all of which are required). An acquisition event which does not contain either the ‘channel’ key or the ‘axes’ key will not acquire an image, and can be used to control hardware only.

event = {
      # A dictionary with the positions along various axes (e.g. time point index,
      # z-slice index, etc.).
      'axes': {
              # Axis names can be any string
              'axis1_name': 1,

              # They can take integer or string values
              'axis2_name': 'first_position',

              # "channel" is a special axis name which will lead to different positions being
              # overlayed in different colors in the image viewer
              'channel': 'DAPI',

              # If an XYTiledAcquisition is being used, "row" and "column" are special
              # values that the acquisition engine will convert into stage coordinates,
              # laying out the acquired images in a grid
              'row': 1,
              'column': 0,

              },

      # Config groups can be used to control groups of properties
      'config_group': ['name_of_config_group', 'name_of_preset'],

      'exposure': 10.6, # exposure time in ms

      # Z stages
      # The 'z' field controls the default z stage device (i.e. the Core-Focus device)
      'z': 123.45, # the z position in um

      # Alternatively the device can be specified by name, or multiple devices can
      # be controlled by providing their names and positions in um
      'stage_positions':
                      [['z_stage1_name', 12.34],
                      ['z_stage2_name', 1234.566]],


      # For timelapses: how long to wait before starting next time point in s
      'min_start_time': 100,

      # For XY stages
      'x': 123.4, # positions in um
      'y': 567.8,



      # If using a camera other than the 'Core-camera', it can be specified by name here
      'camera': 'a_camera_device_name',


      # Other arbitrary hardware settings can be encoded in a list of strings with
      # each entry containing the name of the device, the name of the property,
      # and the value of the property
      'properties': [['DeviceName', 'PropertyName', 'PropertyValue'],
              ['OtherDeviceName', 'OtherPropertyName', 'OtherPropertyValue']],


      # Custom metadata can be added to the event, which will be added to the metadata
      # of the resultant image under the 'tags' key
      'tags': {
              'whatever_you_want_here': 54,
              'something_else': 'here'}


      }
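
For example, a minimal sketch of two events (the 'DAPI' preset name and the stage positions are hypothetical):

# An event that acquires one image in the 'DAPI' channel at a given z position
snap_event = {
      'axes': {'channel': 'DAPI', 'z': 0},
      'z': 10.0,          # z position in um
      'exposure': 10.6,   # exposure time in ms
      }

# An event with neither the 'channel' nor the 'axes' key controls hardware only
# (no image is acquired), e.g. moving the XY stage
move_event = {'x': 100.0, 'y': 250.0}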

Acquisition APIs

Acquisition

class pycromanager.acquisition.acquisition_superclass.Acquisition(directory: str | None = None, name: str = 'default_acquisition_name', image_process_fn: callable | None = None, event_generation_hook_fn: callable | None = None, pre_hardware_hook_fn: callable | None = None, post_hardware_hook_fn: callable | None = None, post_camera_hook_fn: callable | None = None, notification_callback_fn: callable | None = None, image_saved_fn: callable | None = None, napari_viewer=None, debug: bool = False)
Parameters:
  • directory (str) – saving directory for this acquisition. If it is not supplied, the image data will be stored in RAM (Java backend only)

  • name (str) – Name of the acquisition. This will be used to generate the folder where the data is saved.

  • image_process_fn (Callable) – image processing function that will be called on each acquired image. It can take either two arguments (image, metadata), where image is a numpy array and metadata is a dict containing the corresponding image metadata, or three arguments (image, metadata, queue), where queue is a Queue object that holds upcoming acquisition events. The function should return either an (image, metadata) tuple or a list of such tuples

  • event_generation_hook_fn (Callable) – hook function that will be run as soon as acquisition events are generated (before the hardware sequencing optimization in the acquisition engine). This is useful if one wants to modify acquisition events that they did not generate (e.g. those generated by a GUI application). Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • pre_hardware_hook_fn (Callable) – hook function that will be run just before the hardware is updated in preparation for acquiring a new image. In the case of hardware sequencing, it will be run just before a sequence of instructions is dispatched to the hardware. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • post_hardware_hook_fn (Callable) – hook function that will be run just after the hardware has been updated, before acquiring a new image. In the case of hardware sequencing, it will be run just after a sequence of instructions is dispatched to the hardware, but before the camera sequence has been started. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • post_camera_hook_fn (Callable) – hook function that will be run just after the camera has been triggered to snapImage or startSequence. A common use case for this hook is when one wants to send TTL triggers to the camera from an external timing device that synchronizes with other hardware. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • notification_callback_fn (Callable) – function that will be called whenever a notification is received from the acquisition engine. These include various stages of the control of hardware and the camera and saving of images. Notification callbacks will execute asynchronously with respect to the acquisition process. The supplied function should take a single argument, which will be an AcqNotification object. It should execute quickly, so as to not back up the processing of other notifications.

  • image_saved_fn (Callable) – function that takes two arguments (the Axes of the image that just finished saving, and the Dataset) or three arguments (Axes, Dataset and the event_queue) and gets called whenever a new image is written to disk

  • napari_viewer (napari.Viewer) – Provide a napari viewer to display acquired data in napari (https://napari.org/) rather than the built-in NDViewer. None by default. Data is added to the ‘pycromanager acquisition’ layer, which may be pre-configured by the user

  • debug (bool) – whether to print debug messages

  • show_display (bool) – If True, show the image viewer window. If False, show no viewer. (Java backend only)

  • saving_queue_size (int) – The number of images to queue (in memory) while waiting to write to disk. Higher values should in theory allow sequence acquisitions to go faster, but require enough RAM to hold images while they are waiting to be saved (Java backend only)

  • timeout – Timeout in ms for connecting to Java side (Java backend only)

  • port – Allows overriding the default port for using Java backends on a different port. Use this after calling start_headless with the same non-default port (Java backend only)

abort(exception=None)

Cancel any pending events and shut down immediately

Parameters:

exception (Exception) – The exception that is the reason abort is being called

acquire(event_or_events: dict) → AcquisitionFuture

Submit an event or a list of events for acquisition. A single event is a Python dictionary with a specific structure. The acquisition engine will determine if multiple events can be merged into a hardware sequence and executed at once without computer-hardware communication in between. This sequencing will only take place for events within a single call to acquire, so to prevent it, call acquire multiple times, passing each event individually.

Parameters:

event_or_events (list, dict, Generator) – A single acquisition event (a dict), a list of acquisition events, or a generator that yields acquisition events.

abstract await_completion()

Wait for acquisition to finish and resources to be cleaned up. If data is being written to disk, this will wait for the data to be written before returning.

get_dataset()

Get access to the dataset backing this acquisition

abstract get_viewer()

Return a reference to the current viewer, if the show_display argument was set to True. The returned object is either an instance of NDViewer or napari.Viewer()

mark_finished()

Signal to acquisition that no more events will be added and it is time to initiate shutdown. This is only needed if the context manager (i.e. “with Acquisition…”) is not used.
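
A minimal usage sketch (the saving directory is hypothetical); on exiting the "with" block, mark_finished() and await_completion() are handled automatically:

from pycromanager import Acquisition

with Acquisition(directory='/path/to/saving/dir', name='example_acq') as acq:
    # one event per z position of a short z-stack
    events = [{'axes': {'z': z_index}, 'z': z_index * 0.5} for z_index in range(10)]
    acq.acquire(events)

# the saved data can then be opened for reading
dataset = acq.get_dataset()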

multi_d_acquisition_events

pycromanager.multi_d_acquisition_events(num_time_points: int | None = None, time_interval_s: float | List[float] = 0, z_start: float | None = None, z_end: float | None = None, z_step: float | None = None, channel_group: str | None = None, channels: list | None = None, channel_exposures_ms: list | None = None, xy_positions: Iterable | None = None, xyz_positions: Iterable | None = None, position_labels: List[str] | None = None, order: str = 'tpcz')

Convenience function for generating the events of a typical multi-dimensional acquisition (i.e. an acquisition with some combination of multiple timepoints, channels, z-slices, or xy positions)

Parameters:
  • num_time_points (int) – How many time points if it is a timelapse (Default value = None)

  • time_interval_s (float or list of floats) – the minimum interval between consecutive time points in seconds. If set to 0, the acquisition will go as fast as possible. If a list is provided, its length should be equal to ‘num_time_points’. Elements in the list are the intervals between consecutive timepoints in the timelapse; the first element is the delay before capturing the first image (Default value = 0)

  • z_start (float) – z-stack starting position, in µm. If xyz_positions is given z_start is relative to the points’ z position. (Default value = None)

  • z_end (float) – z-stack ending position, in µm. If xyz_positions is given z_end is relative to the points’ z position. (Default value = None)

  • z_step (float) – step size of z-stack, in µm (Default value = None)

  • channel_group (str) – name of the channel group (which should correspond to a config group in micro-manager) (Default value = None)

  • channels (list of strings) – list of channel names, which correspond to possible settings of the config group (e.g. [‘DAPI’, ‘FITC’]) (Default value = None)

  • channel_exposures_ms (list of floats or ints) – list of camera exposure times corresponding to each channel. The length of this list should be the same as the length of the list of channels (Default value = None)

  • xy_positions (iterable) – An array of shape (N, 2) containing N (X, Y) stage coordinates. (Default value = None)

  • xyz_positions (iterable) – An array of shape (N, 3) containing N (X, Y, Z) stage coordinates. If passed, then z_start, z_end, and z_step will be relative to the z position in xyz_positions. (Default value = None)

  • position_labels (iterable) – An array of length N containing position labels for each of the XY stage positions. (Default value = None)

  • order (str) – string that specifies the order of different dimensions. Must have some ordering of the letters c, t, p, and z. For example, ‘tcz’ would run a timelapse where z stacks would be acquired at each channel in series. ‘pt’ would move to different xy stage positions and run a complete timelapse at each one before moving to the next (Default value = ‘tpcz’)

Returns:

events

Return type:

list of dicts
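
The returned events are typically passed directly to Acquisition.acquire. A sketch (the saving path and the 'Channel' config group and preset names are assumptions):

from pycromanager import Acquisition, multi_d_acquisition_events

with Acquisition(directory='/path/to/saving/dir', name='mda_example') as acq:
    events = multi_d_acquisition_events(
        num_time_points=4,
        time_interval_s=2,
        z_start=0, z_end=6, z_step=0.4,
        channel_group='Channel',
        channels=['DAPI', 'FITC'],
        channel_exposures_ms=[10, 50],
        order='tcz',
    )
    acq.acquire(events)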

XYTiledAcquisition

class pycromanager.XYTiledAcquisition(tile_overlap: int, directory: str | None = None, name: str | None = None, max_multi_res_index: int | None = None, image_process_fn: callable | None = None, pre_hardware_hook_fn: callable | None = None, post_hardware_hook_fn: callable | None = None, post_camera_hook_fn: callable | None = None, show_display: bool = True, image_saved_fn: callable | None = None, saving_queue_size: int = 20, timeout: int = 1000, port: int = 4827, debug: bool = False)

For making tiled images with an XY stage and multiresolution saving (e.g. for making one large contiguous image of a sample larger than the field of view)

Parameters:
  • tile_overlap (int or tuple of int) – If given, XY tiles will be laid out in a grid and multi-resolution saving will be activated. Argument can be a two element tuple describing the pixel overlaps between adjacent tiles, i.e. (pixel_overlap_x, pixel_overlap_y), or an integer to use the same overlap for both. For these features to work, the current hardware configuration must have a valid affine transform between camera coordinates and XY stage coordinates

  • directory (str) – saving directory for this acquisition. If it is not supplied, the image data will be stored in RAM (Java backend only)

  • name (str) – Name of the acquisition. This will be used to generate the folder where the data is saved.

  • max_multi_res_index (int) – Maximum index to downsample to in multi-res pyramid mode. 0 is no downsampling, 1 is downsampled up to 2x, 2 is downsampled up to 4x, etc. If not provided, it will be dynamically calculated and updated from data

  • image_process_fn (Callable) – image processing function that will be called on each acquired image. It can take either two arguments (image, metadata), where image is a numpy array and metadata is a dict containing the corresponding image metadata, or three arguments (image, metadata, queue), where queue is a Queue object that holds upcoming acquisition events. The function should return either an (image, metadata) tuple or a list of such tuples

  • pre_hardware_hook_fn (Callable) – hook function that will be run just before the hardware is updated in preparation for acquiring a new image. In the case of hardware sequencing, it will be run just before a sequence of instructions is dispatched to the hardware. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • post_hardware_hook_fn (Callable) – hook function that will be run just after the hardware has been updated, before acquiring a new image. In the case of hardware sequencing, it will be run just after a sequence of instructions is dispatched to the hardware, but before the camera sequence has been started. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • post_camera_hook_fn (Callable) – hook function that will be run just after the camera has been triggered to snapImage or startSequence. A common use case for this hook is when one wants to send TTL triggers to the camera from an external timing device that synchronizes with other hardware. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • show_display (bool) – If True, show the image viewer window. If False, show no viewer. (Java backend only)

  • image_saved_fn (Callable) – function that takes two arguments (the Axes of the image that just finished saving, and the Dataset) or three arguments (Axes, Dataset and the event_queue) and gets called whenever a new image is written to disk

  • saving_queue_size (int) – The number of images to queue (in memory) while waiting to write to disk. Higher values should in theory allow sequence acquisitions to go faster, but require enough RAM to hold images while they are waiting to be saved (Java backend only)

  • timeout – Timeout in ms for connecting to Java side (Java backend only)

  • port – Allows overriding the default port for using Java backends on a different port. Use this after calling start_headless with the same non-default port (Java backend only)

  • debug (bool) – whether to print debug messages
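
A sketch of a 2x2 grid acquisition, using the 'row'/'column' axes described in the event specification above (the saving path is hypothetical, and a valid camera-to-stage affine transform must be configured):

from pycromanager import XYTiledAcquisition

with XYTiledAcquisition(directory='/path/to/saving/dir', name='tiled_example',
                        tile_overlap=10) as acq:   # 10 pixel overlap between adjacent tiles
    for row in range(2):
        for column in range(2):
            acq.acquire({'axes': {'row': row, 'column': column}})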

ExploreAcquisition

class pycromanager.ExploreAcquisition(directory: str, name: str, z_step_um: float, tile_overlap: int, channel_group: str | None = None, image_process_fn: callable | None = None, pre_hardware_hook_fn: callable | None = None, post_hardware_hook_fn: callable | None = None, post_camera_hook_fn: callable | None = None, show_display: bool = True, image_saved_fn: callable | None = None, saving_queue_size: int = 20, timeout: int = 2500, port: int = 4827, debug: bool = False)

Launches a user interface for an “Explore Acquisition”, a type of XYTiledAcquisition in which acquisition events come from the user dynamically driving the stage and selecting areas to image

Parameters:
  • directory (str) – saving directory for this acquisition. If it is not supplied, the image data will be stored in RAM (Java backend only)

  • name (str) – Name of the acquisition. This will be used to generate the folder where the data is saved.

  • z_step_um (float) – Spacing between successive z planes, in microns

  • tile_overlap (int or tuple of int) – If given, XY tiles will be laid out in a grid and multi-resolution saving will be activated. Argument can be a two element tuple describing the pixel overlaps between adjacent tiles. i.e. (pixel_overlap_x, pixel_overlap_y), or an integer to use the same overlap for both. For these features to work, the current hardware configuration must have a valid affine transform between camera coordinates and XY stage coordinates

  • channel_group (str) – Name of a config group that provides selectable channels

  • image_process_fn (Callable) – image processing function that will be called on each acquired image. It can take either two arguments (image, metadata), where image is a numpy array and metadata is a dict containing the corresponding image metadata, or three arguments (image, metadata, queue), where queue is a Queue object that holds upcoming acquisition events. The function should return either an (image, metadata) tuple or a list of such tuples

  • pre_hardware_hook_fn (Callable) – hook function that will be run just before the hardware is updated in preparation for acquiring a new image. In the case of hardware sequencing, it will be run just before a sequence of instructions is dispatched to the hardware. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • post_hardware_hook_fn (Callable) – hook function that will be run just after the hardware has been updated, before acquiring a new image. In the case of hardware sequencing, it will be run just after a sequence of instructions is dispatched to the hardware, but before the camera sequence has been started. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • post_camera_hook_fn (Callable) – hook function that will be run just after the camera has been triggered to snapImage or startSequence. A common use case for this hook is when one wants to send TTL triggers to the camera from an external timing device that synchronizes with other hardware. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • show_display (bool) – If True, show the image viewer window. If False, show no viewer. (Java backend only)

  • image_saved_fn (Callable) – function that takes two arguments (the Axes of the image that just finished saving, and the Dataset) or three arguments (Axes, Dataset and the event_queue) and gets called whenever a new image is written to disk

  • saving_queue_size (int) – The number of images to queue (in memory) while waiting to write to disk. Higher values should in theory allow sequence acquisitions to go faster, but require enough RAM to hold images while they are waiting to be saved (Java backend only)

  • timeout – Timeout in ms for connecting to Java side (Java backend only)

  • port – Allows overriding the default port for using Java backends on a different port. Use this after calling start_headless with the same non-default port (Java backend only)

  • debug (bool) – whether to print debug messages
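
A minimal launch sketch (the directory and the 'Channel' config group name are assumptions); the acquisition itself is then driven interactively through the user interface:

from pycromanager import ExploreAcquisition

ExploreAcquisition(directory='/path/to/saving/dir', name='explore_example',
                   z_step_um=1.0, tile_overlap=10, channel_group='Channel')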

MagellanAcquisition

class pycromanager.MagellanAcquisition(magellan_acq_index: int | None = None, magellan_explore: bool = False, image_process_fn: callable | None = None, event_generation_hook_fn: callable | None = None, pre_hardware_hook_fn: callable | None = None, post_hardware_hook_fn: callable | None = None, post_camera_hook_fn: callable | None = None, image_saved_fn: callable | None = None, timeout: int = 500, port: int = 4827, debug: bool = False)

Class used for launching Micro-Magellan acquisitions. Must pass either magellan_acq_index or magellan_explore as an argument

Parameters:
  • magellan_acq_index (int) – run this acquisition using the settings specified at this position in the main GUI of Micro-Magellan (a Micro-Manager plugin). This index starts at 0

  • magellan_explore (bool) – Run a Micro-Magellan explore acquisition

  • image_process_fn (Callable) – image processing function that will be called on each acquired image. It can take either two arguments (image, metadata), where image is a numpy array and metadata is a dict containing the corresponding image metadata, or three arguments (image, metadata, queue), where queue is a Queue object that holds upcoming acquisition events. The function should return either an (image, metadata) tuple or a list of such tuples

  • event_generation_hook_fn (Callable) – hook function that will be run as soon as acquisition events are generated (before the hardware sequencing optimization in the acquisition engine). This is useful if one wants to modify acquisition events that they did not generate (e.g. those generated by a GUI application). Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • pre_hardware_hook_fn (Callable) – hook function that will be run just before the hardware is updated in preparation for acquiring a new image. In the case of hardware sequencing, it will be run just before a sequence of instructions is dispatched to the hardware. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • post_hardware_hook_fn (Callable) – hook function that will be run just after the hardware has been updated, before acquiring a new image. In the case of hardware sequencing, it will be run just after a sequence of instructions is dispatched to the hardware, but before the camera sequence has been started. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • post_camera_hook_fn (Callable) – hook function that will be run just after the camera has been triggered to snapImage or startSequence. A common use case for this hook is when one wants to send TTL triggers to the camera from an external timing device that synchronizes with other hardware. Accepts either one argument (the current acquisition event) or two arguments (current event, event_queue)

  • image_saved_fn (Callable) – function that takes two arguments (the Axes of the image that just finished saving, and the Dataset) or three arguments (Axes, Dataset and the event_queue) and gets called whenever a new image is written to disk

  • timeout – Timeout in ms for connecting to Java side (Java backend only)

  • port – Allows overriding the default port for using Java backends on a different port. Use this after calling start_headless with the same non-default port (Java backend only)

  • debug (bool) – whether to print debug messages
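
A sketch that runs the first acquisition configured in the Micro-Magellan GUI and blocks until it finishes:

from pycromanager import MagellanAcquisition

acq = MagellanAcquisition(magellan_acq_index=0)  # settings at index 0 in the GUI
acq.await_completion()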

Headless mode

start_headless

pycromanager.start_headless(mm_app_path: str, config_file: str | None = None, java_loc: str | None = None, python_backend=False, core_log_path: str = '', buffer_size_mb: int = 1024, max_memory_mb: int = 2000, port: int = 4827, debug=False)

Start a Java process that contains the necessary libraries for pycro-manager to run, so that it can be run independently of the Micro-Manager GUI/application. This call will create and initialize MMCore with the configuration file provided.

On Windows platforms, the Java Runtime Environment will be found automatically, as it is installed along with the Micro-Manager application.

On non-Windows platforms, it may need to be installed or specified manually to ensure compatibility; Java 11 is the version most likely to work without issue.

Parameters:
  • mm_app_path (str) – Path to top level folder of Micro-Manager installation (made with graphical installer)

  • config_file (str) – Path to micro-manager config file, with which core will be initialized. If None then initialization is left to the user.

  • java_loc (str) – Path to the Java installation with which the process should be run (Java backend only)

  • python_backend (bool) – Whether to use the Python backend or the Java backend

  • core_log_path (str) – Path to where core log files should be created

  • buffer_size_mb (int) – Size of circular buffer in MB in MMCore

  • max_memory_mb (int) – Maximum amount of memory to be allocated to the JVM (Java backend only)

  • port (int) – Default port to use for ZMQServer (Java backend only)

  • debug (bool) – Print debug messages

stop_headless

pycromanager.stop_headless(debug=False)
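
stop_headless shuts down the headless processes created by start_headless. A sketch of a headless session (the installation and config file paths are hypothetical):

from pycromanager import start_headless, stop_headless, Core

mm_app_path = '/path/to/Micro-Manager-2.0'        # hypothetical install location
config_file = mm_app_path + '/MMConfig_demo.cfg'  # hypothetical config file

start_headless(mm_app_path, config_file)

core = Core()        # connects to the headless backend
core.snap_image()

stop_headless()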

Dataset

class pycromanager.Dataset(dataset_path=None, file_io: ~ndtiff.file_io.NDTiffFileIO = <ndtiff.file_io.NDTiffFileIO object>, _summary_metadata=None)

Generic class for opening NDTiff datasets. Creating an instance of this class will automatically return an instance of the class appropriate to the version and type of NDTiff dataset required

class ndtiff.NDTiffDataset(dataset_path=None, file_io: ~ndtiff.file_io.NDTiffFileIO = <ndtiff.file_io.NDTiffFileIO object>, _summary_metadata=None, **kwargs)

Class that opens a single NDTiff dataset

Provides access to an NDTiffStorage dataset, either one currently being acquired or one on disk

Parameters:
  • dataset_path (str) – Absolute path of top level folder of a dataset on disk

  • file_io (ndtiff.file_io.NDTiffFileIO) – A container providing various methods for interacting with files.

  • _summary_metadata (dict) – Summary metadata for a dataset that is currently being acquired. Users should not supply this themselves

as_array(axes=None, stitched=False, **kwargs)

Read all image data as one big Dask array with the last two axes as y, x and preceding axes depending on the data. The dask array is made up of memory-mapped numpy arrays, so the dataset does not need to fit into RAM. If the data doesn’t fully fill out the array (e.g. not every z-slice collected at every time point), zeros will be added automatically.

To convert data into a numpy array, call np.asarray() on the returned result. However, doing so will bring the data into RAM, so it may be better to do this on only a slice of the array at a time.

Parameters:
  • axes (list) – list of axes names over which to iterate and merge into a stacked array. The order of axes supplied in this list will be the order of the axes of the returned dask array. If None, all axes will be used in PTCZYX order.

  • stitched (bool) – If true and tiles were acquired in a grid, lay out adjacent tiles next to one another (Default value = False)

  • **kwargs – names and integer positions of axes on which to slice data

Returns:

dataset

Return type:

dask array
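
A sketch of lazily opening a saved dataset and materializing only a slice (the path is hypothetical, and the array's axis order depends on how the data were acquired):

import numpy as np
from pycromanager import Dataset

dataset = Dataset('/path/to/saved/acquisition')
dask_array = dataset.as_array()   # all axes, in PTCZYX order

# bring only the first slice along the leading axis into RAM
first_slice = np.asarray(dask_array[0])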

get_channel_names()
Returns:

list of channel names (strings)

get_index_keys()

Return a list of every combination of axes that has an image in this dataset

has_image(channel=None, z=None, time=None, position=None, row=None, column=None, **kwargs)

Check if this image is present in the dataset

Parameters:
  • channel (int or str) – index of the channel, if applicable (Default value = None)

  • z (int) – index of z slice, if applicable (Default value = None)

  • time (int) – index of the time point, if applicable (Default value = None)

  • position (int) – index of the XY position, if applicable (Default value = None)

  • row (int) – index of tile row for XY tiled datasets (Default value = None)

  • column (int) – index of tile column for XY tiled datasets (Default value = None)

  • **kwargs – names and integer positions of any other axes

Returns:

indicating whether the dataset has an image matching the specifications

Return type:

bool

has_new_image()

For datasets currently being acquired, check whether a new image has arrived since this function was last called, so that a viewer displaying the data can be updated.

read_image(channel=None, z=None, time=None, position=None, row=None, column=None, **kwargs)

Read image data as numpy array

Parameters:
  • channel (int or str) – index of the channel, if applicable (Default value = None)

  • z (int) – index of z slice, if applicable (Default value = None)

  • time (int) – index of the time point, if applicable (Default value = None)

  • position (int) – index of the XY position, if applicable (Default value = None)

  • row (int) – index of tile row for XY tiled datasets (Default value = None)

  • column (int) – index of tile column for XY tiled datasets (Default value = None)

  • **kwargs – names and integer positions of any other axes

Returns:

image – image as a 2D numpy array, or tuple with image and image metadata as dict

Return type:

numpy array or tuple

read_metadata(channel=None, z=None, time=None, position=None, row=None, column=None, **kwargs)

Read metadata only. Faster than using read_image to retrieve metadata

Parameters:
  • channel (int or str) – index of the channel, if applicable (Default value = None)

  • z (int) – index of z slice, if applicable (Default value = None)

  • time (int) – index of the time point, if applicable (Default value = None)

  • position (int) – index of the XY position, if applicable (Default value = None)

  • row (int) – index of tile row for XY tiled datasets (Default value = None)

  • column (int) – index of tile column for XY tiled datasets (Default value = None)

  • **kwargs – names and integer positions of any other axes

Returns:

metadata

Return type:

dict
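
Continuing the sketch above with the same dataset object, a single image and its metadata can be read directly (the axis indices are assumptions and must exist in the dataset):

image = dataset.read_image(channel=0, z=5, time=2)        # 2D numpy array
metadata = dataset.read_metadata(channel=0, z=5, time=2)  # dict of image metadata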

class ndtiff.NDTiffPyramidDataset(dataset_path=None, file_io: ~ndtiff.file_io.NDTiffFileIO = <ndtiff.file_io.NDTiffFileIO object>, _summary_metadata=None)

Class that opens a single NDTiffStorage multi-resolution pyramid dataset

Provides access to a NDTiffStorage pyramid dataset, either one currently being acquired or one on disk

Parameters:
  • dataset_path (str) – Absolute path of top level folder of a dataset on disk

  • file_io (ndtiff.file_io.NDTiffFileIO) – A container providing various methods for interacting with files.

  • _summary_metadata (dict) – Summary metadata; only non-None for in-progress datasets. Users need not supply this directly

as_array(axes=None, stitched=False, res_level=None, **kwargs)

Read all image data as one big Dask array with the last two axes as y, x and preceding axes depending on the data. The dask array is made up of memory-mapped numpy arrays, so the dataset does not need to fit into RAM. If the data doesn’t fully fill out the array (e.g. not every z-slice collected at every time point), zeros will be added automatically.

To convert data into a numpy array, call np.asarray() on the returned result. However, doing so will bring the data into RAM, so it may be better to do this on only a slice of the array at a time.

Parameters:
  • axes (list) – list of axes names over which to iterate and merge into a stacked array. If None, all axes will be used. The order of axes supplied in this list will be the order of the axes of the returned dask array

  • stitched (bool) – Lay out adjacent tiles next to one another to form a larger image (Default value = False)

  • res_level (int or None) – the resolution level to return. If None, return all resolutions in a list

  • **kwargs – names and integer positions of axes on which to slice data

Returns:

dataset

Return type:

dask array

get_index_keys(res_level=0)

Return a list of every combination of axes that has an image in this dataset

has_image(channel=None, z=None, time=None, position=None, resolution_level=0, row=None, column=None, **kwargs)

Check if this image is present in the dataset

Parameters:
  • channel (int) – index of the channel, if applicable (Default value = None)

  • z (int) – index of z slice, if applicable (Default value = None)

  • time (int) – index of the time point, if applicable (Default value = None)

  • position (int) – index of the XY position, if applicable (Default value = None)

  • row (int) – index of tile row for XY tiled datasets (Default value = None)

  • column (int) – index of tile column for XY tiled datasets (Default value = None)

  • resolution_level – 0 is full resolution; otherwise pixels are downsampled by a factor of 2 ** resolution_level (Default value = 0)

  • **kwargs – names and integer positions of any other axes

Returns:

indicating whether the dataset has an image matching the specifications

Return type:

bool

has_new_image()

For datasets currently being acquired, check whether a new image has arrived since this function was last called, so that a viewer displaying the data can be updated.

read_image(channel=None, z=None, time=None, position=None, row=None, column=None, resolution_level=0, **kwargs)

Read image data as numpy array

Parameters:
  • channel (int) – index of the channel, if applicable (Default value = None)

  • z (int) – index of z slice, if applicable (Default value = None)

  • time (int) – index of the time point, if applicable (Default value = None)

  • position (int) – index of the XY position, if applicable (Default value = None)

  • row (int) – index of tile row for XY tiled datasets (Default value = None)

  • column (int) – index of tile column for XY tiled datasets (Default value = None)

  • resolution_level – 0 is full resolution; otherwise pixels are downsampled by a factor of 2 ** resolution_level (Default value = 0)

  • **kwargs – names and integer positions of any other axes

Returns:

image – image as a 2D numpy array, or tuple with image and image metadata as dict

Return type:

numpy array or tuple

read_metadata(channel=None, z=None, time=None, position=None, row=None, column=None, resolution_level=0, **kwargs)

Read metadata only. Faster than using read_image to retrieve metadata

Parameters:
  • channel (int) – index of the channel, if applicable (Default value = None)

  • z (int) – index of z slice, if applicable (Default value = None)

  • time (int) – index of the time point, if applicable (Default value = None)

  • position (int) – index of the XY position, if applicable (Default value = None)

  • row (int) – index of tile row for XY tiled datasets (Default value = None)

  • column (int) – index of tile column for XY tiled datasets (Default value = None)

  • resolution_level – 0 is full resolution; otherwise pixels are downsampled by a factor of 2 ** resolution_level (Default value = 0)

  • **kwargs – names and integer positions of any other axes

Returns:

metadata

Return type:

dict

Micro-Manager Core

The core API is discovered dynamically at runtime, though not every method is implemented. Typing core. and using autocomplete with IPython is the best way to discover which functions are available. Documentation for the Java version of the core API (which pycromanager calls) can be found here.

class pycromanager.Core(**kwargs)

Return a remote Java ZMQ Core, or a local Python Core, if start_headless has been called with a Python backend
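
A sketch of calling core methods; Java camelCase names are converted to snake_case by default (e.g. setExposure() becomes set_exposure()):

from pycromanager import Core

core = Core()
core.set_exposure(20)                  # calls setExposure(20) on the Java side
core.snap_image()
tagged_image = core.get_tagged_image()
# reshape the 1D pixel array into a 2D image using the metadata tags
pixels = tagged_image.pix.reshape(tagged_image.tags['Height'],
                                  tagged_image.tags['Width'])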

Java objects and classes

class pycromanager.JavaObject(classpath, args: list | None = None, port=4827, timeout=500, new_socket=False, convert_camel_case=True, debug=False)
Instance of an object on the Java side. Returns a Python “shadow” of the object, which behaves just like the object on the Java side (i.e. same methods and fields). Methods of the object can be inferred at runtime using IPython autocomplete

classpath: str

Full classpath of the Java object

args: list

list of constructor arguments

port: int

The port of the Bridge used to create the object

new_socket: bool

If True, will create the new Java object on a new port so that blocking calls will not interfere with the bridge’s main port

convert_camel_case: bool

If True, methods for Java objects that are passed across the bridge will have their names converted from camel case to underscores. i.e. class.methodName() becomes class.method_name()

debug:

print debug messages
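
A sketch constructing a standard Java object and calling its methods through the bridge:

from pycromanager import JavaObject

java_list = JavaObject('java.util.ArrayList')
java_list.add(1)
java_list.add(2)
print(java_list.size())   # prints 2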

class pycromanager.JavaClass(classpath, port=4827, timeout=500, new_socket=False, convert_camel_case=True, debug=False)
Get an object corresponding to a Java class, for example to be used when calling static methods on the class directly

classpath: str

Full classpath of the Java class

port: int

The port of the Bridge used to create the object

new_socket: bool

If True, will create the new Java object on a new port so that blocking calls will not interfere with the bridge’s main port

convert_camel_case: bool

If True, methods for Java objects that are passed across the bridge will have their names converted from camel case to underscores. i.e. class.methodName() becomes class.method_name()

debug:

print debug messages

Convenience classes for special Java objects

class pycromanager.Magellan(convert_camel_case=True, port=4827, timeout=500, new_socket=False, debug=False)

An instance of the Micro-Magellan API

convert_camel_case: bool

If True, methods for Java objects that are passed across the bridge will have their names converted from camel case to underscores. i.e. class.methodName() becomes class.method_name()

port: int

The port of the Bridge used to create the object

new_socket: bool

If True, will create the new Java object on a new port so that blocking calls will not interfere with the bridge’s main port

debug: bool

print debug messages

class pycromanager.Studio(convert_camel_case=True, port=4827, timeout=500, new_socket=False, debug=False)

An instance of the Studio object that provides access to the Micro-Manager Java APIs

convert_camel_case: bool

If True, methods for Java objects that are passed across the bridge will have their names converted from camel case to underscores. i.e. class.methodName() becomes class.method_name()

port: int

The port of the Bridge used to create the object

new_socket: bool

If True, will create the new Java object on a new port so that blocking calls will not interfere with the bridge’s main port

debug: bool

print debug messages

Logging control

set_logging_instance

reset_logger_instance

pycromanager.reset_logger_instance() None

Resets the logger to the default logger