Function: PythONI

What's PythONI?

NimOS comes with its own Python IDE called PythONI (/pi thon ni/). This simple Python IDE lets you write your own automation scripts to control almost every part of the Nanoimager. This instruction covers what is included and what can be achieved with PythONI. As with any kind of programming, you will learn more by trying things yourself, so please don't limit yourself to this instruction. For general python tutorials, check here.


General Instruction

To access the NimOS python virtual environment:
  1. Open the Windows Command Prompt.
  2. Navigate to "C:/Program Files/OxfordNanoimaging/.venv/Scripts" using the "cd" command:
  3. Activate the NimOS python virtual environment by running "activate.bat":

  4. Do whatever you would like to, for example, install a new module using "pip install [module]" (the full command sequence is sketched below). Note that the NimOS python version is 3.6.6, and upgrading it is not recommended.
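
  For reference, the whole sequence typed into the Command Prompt might look like the following (the module name here is only an example):
    cd "C:/Program Files/OxfordNanoimaging/.venv/Scripts"
    activate.bat
    pip install scikit-image
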
To access the PythONI interface in NimOS:
  1. Click on Advanced > Python Console to open the PythONI window:
  2. In the pop-up window, do whatever you would like to, for example, write a simple script to start an acquisition.
  3. All example python scripts are available in Global Scripts and stored in "C:/ProgramData/OxfordNi_Nim/python_scripts". User-defined python scripts are available in User Scripts and stored in "C:/Users/ONI/AppData/Local/OxfordNi_Nim/python_scripts". It is highly recommended to review these example scripts for a better understanding of how to use the functions in practice.
  4. There are several ways to run a python script from the PythONI interface:
    1. Load the python script from File > Open and click on Run Code to run.
    2. Right-click on any example script and click on Execute Script to run.
    3. Execute a python script with arguments through the command line using the following commands (a sketch of how the script itself can read these arguments follows after this list):
      import sys
      sys.argv=['C:/users/oni/desktop/example.py', '--key1', 'argv1', '--key2', 'argv2'] # add arguments
      exec(open('C:/users/oni/desktop/example.py').read()) # run a python script with arguments
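
    A minimal sketch of what the executed script (the hypothetical example.py above) could look like if it reads those arguments with argparse; the keys --key1 and --key2 are just the placeholders used above:
      # example.py (hypothetical) - read the arguments passed in via sys.argv
      import argparse
      parser=argparse.ArgumentParser(description='example PythONI script with arguments')
      parser.add_argument('--key1',default='') # first argument
      parser.add_argument('--key2',default='') # second argument
      args=parser.parse_args() # parses the sys.argv set before exec()
      print('key1 is '+args.key1+', key2 is '+args.key2) # print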


Cheatsheet (Function Blocks)

  1. How to fetch a frame (or a channel) from the imaging camera and save it as an image
    # read the latest frame (raw) from camera and save it as a tiff image
    def save_frame_from_camera(o='C:/users/oni/desktop/image.tif'):
        raw=camera.GetLatestImage() # get the latest frame from camera
        p=raw.Pixels # read the pixels
        h=raw.Dims.Height # read the height
        w=raw.Dims.Width # read the width
        # import numpy as np # import numpy if not done
        im=np.array(p).reshape((h,w)) # cast to a numpy.array
        import skimage.io # use skimage for saving tiffs
        skimage.io.imsave(o,im,check_contrast=False) # save as 32-bit tiff

    # read the latest frame from camera and save one channel as a tiff image
    def save_channel_from_camera(o='C:/users/oni/desktop/channel.tif', c=0):
        raw=camera.GetLatestImage() # get the latest frame from camera
        p=raw.Channel(c).Pixels # read the pixels of channel c
        h=raw.Channel(c).Dims.Height # read the height of channel c
        w=raw.Channel(c).Dims.Width # read the width of channel c
        # import numpy as np # import numpy if not done
        im=np.array(p).reshape((h,w)) # cast to a numpy.array
        import skimage.io # use skimage for saving tiffs
        skimage.io.imsave(o,im,check_contrast=False) # save as 32-bit tiff

    # read the latest frame from camera and save one mapped channel as a tiff image
    def save_mapped_channel_from_camera(o='C:/users/oni/desktop/mapped_channel.tif', c=0):
        raw=camera.GetLatestImage() # get the latest frame from camera
        p=raw.MappedChannel(c).Pixels # read the pixels of mapped channel c
        h=raw.MappedChannel(c).Dims.Height # read the height of mapped channel c
        w=raw.MappedChannel(c).Dims.Width # read the width of mapped channel c
        # import numpy as np # import numpy if not done
        im=np.array(p).reshape((h,w)) # cast to a numpy.array
        import skimage.io # use skimage for saving tiffs
        skimage.io.imsave(o,im,check_contrast=False) # save as 32-bit tiff

  2. How to fetch a frame from the focus camera and save it as an image
    def save_frame_from_focus_camera(o='C:/users/oni/desktop/focus.tif'):
        raw=focus_cam.GetLatestImage() # get the latest frame from the focus camera
        im=nim_image_to_array(raw) # cast to a numpy.array
        import skimage.io # use skimage for saving tiffs
        skimage.io.imsave(o,im,check_contrast=False) # save as 16-bit tiff

  3. How to control the lasers
    # turn on a laser at a percentage
    def turn_on_laser_percentage(l=0,p=30):
        light.GlobalOnState=True # set global laser state to true
        light[l].PercentPower=p # set laser l to p%
        light[l].Enabled=True # turn on laser l

    # turn off a laser
    def turn_off_laser(l=0):
        light[l].Enabled=False # turn off laser l
        light[l].PercentPower=0 # set laser l to 0%

    # turn on the focus laser
    def turn_on_focus_laser(p=100):
        light.FocusLaser.PercentPower=p # set focus laser to p%, the focus laser can also be accessed using light[5]
        light.FocusLaser.Enabled=True # turn on focus laser

    # turn off the focus laser
    def turn_off_focus_laser():
        light.FocusLaser.Enabled=False # turn off focus laser

    # read the laser information, such as power in mW
    def read_laser(l=0):
        wavelength=light[l].Wavelength # get wavelength of laser l
        power=light[l].PowerW*1000 # get current laser power in mW
        print('current power of laser '+str(wavelength)+' is '+str(power)+'mW') # print

  4. How to move the stages
    # move a stage to a position
    def move_stage_to_position(a=0,p=100):
        if a==0: # x-stage
            axis=stage.Axis.X
        elif a==1: # y-stage
            axis=stage.Axis.Y
        else: # z-stage
            axis=stage.Axis.Z
        stage.RequestMoveAbsolute(axis,p) # move the stage to p um
        # import time # import time if not done
        while stage.IsMoving(axis): # wait until the stage has moved
            time.sleep(0.01)
        position=stage.GetPositionInMicrons(axis) # read current position in um
        print('requested stage position is '+str(p/1000.0)+'mm') # print
        print('current stage position is '+str(position/1000.0)+'mm') # print

  5. How to control the LEDs
    # turn on an LED at a percentage
    def turn_on_led_percentage(l=0,p=0.5):
        led=instrument.TransilluminationControl
        if l==0: # for blue 465nm
            led.SetRingColour([0,p,0]) # set LED power to p%
        elif l==1: # for green 520nm
            led.SetRingColour([0,0,p]) # set LED power to p%
        else: # for red 620nm
            led.SetRingColour([p,0,0]) # set LED power to p%
        # do whatever you would like to
        led.Enabled=False # turn off LED

  6. How to control the illumination angle

    # move the illumination angle
    def set_illumination_angle(angle=0):
        illum_angle.RequestMoveAbsolute(angle) # set the illumination angle
        # import time # import time if not done
        time.sleep(1) # wait at least 1s for the illumination angle to change; code execution and stage translation run on different threads
        current_angle=illum_angle.GetPositionInDegrees() # get the current illumination angle
        print('requested illumination angle is '+str(angle)+' degrees') # print
        print('current illumination angle is '+str(current_angle)+' degrees') # print


  7. How to control the temperature

    # set and enable temperature control, default to 31C
    def set_temperature_control(target_temperature=31):
        temperature.TargetTemperatureC=target_temperature # set the temperature in C
        temperature.ControlEnabled=True # enable temperature control
        current_temperature=temperature.CurrentTemperatureC # read the current temperature in C
        print('requested temperature is '+str(target_temperature)+'C') # print
        print('current temperature is '+str(current_temperature)+'C') # print

    # disable temperature control
    def disable_temperature_control():
        temperature.ControlEnabled=False # disable temperature control

  8. How to acquire an image

    # start an acquisition immediately
    def acquire_images(output_folder,output_filename,total_num_frames):
        # set up lasers and camera
        acquisition.SaveTiffFiles=True # enable saving tiffs
        acquisition.Start(output_folder,output_filename,total_num_frames) # start the acquisition immediately
        # import time # import time if not done
        while acquisition.IsActiveOrCompleting: # wait until the acquisition has completed
            time.sleep(0.1)
        while data_manager.IsBusy: # wait until the data has been processed
            time.sleep(0.1)


  9. How to run an overview scan
    The overview scan module is not natively supported by PythONI, so you will have to create your own scanning recipe. An example of snake scanning is provided here:
    # read a channel from camera
    def read_camera(c=0):
        im=camera.GetLatestImage()
        p=im.Channel(c).Pixels
        h=im.Channel(c).Dims.Height
        w=im.Channel(c).Dims.Width
        return np.array(p).reshape((h,w))

    # be sure to turn on lasers and set a focus reference before the overview scan!!!
    camera.SetTargetExposureMilliseconds(30) # set camera exposure to 30ms
    pixel=calibration.ChannelMapping.GetLatestMapping().pixelSize_um # read the pixel size
    xpos=stage.GetPositionInMicrons(stage.Axis.X) # set the origin to the current position
    ypos=stage.GetPositionInMicrons(stage.Axis.Y)
    channel=0 # set the channel to scan
    tile=[10,10] # set row and column [row, col]
    x_positions=[] # initialize x positions
    y_positions=[] # initialize y positions
    width=camera.GetLatestImage().Channel(0).Dims.Width*pixel # get channel width in um
    height=camera.GetLatestImage().Channel(0).Dims.Height*pixel # get channel height in um

    # generate the position list for a snake scan
    for j in range(tile[0]): # for y, height, row
        i=0
        factor=1
        for i in range(tile[1]): # for x, width, column
            factor=1 if j%2==0 else -1 # update the factor separately for odd and even rows
            x_positions.append(xpos+i*width*factor)
            y_positions.append(ypos)
        xpos=xpos+i*width*factor # update the origin for a new row
        ypos=ypos+height

    I=[] # initialize the list of frames
    for i in range(tile[0]*tile[1]):
        stage.RequestMoveAbsolute(stage.Axis.X,x_positions[i]) # move the stage to position
        stage.RequestMoveAbsolute(stage.Axis.Y,y_positions[i])
        # import time # import time if not done
        while stage.IsMoving(stage.Axis.X) or stage.IsMoving(stage.Axis.Y): # wait until the stage has moved
            if autofocus.CurrentStatus is not autofocus.Status.FOCUSING_CONTINUOUS: # check z-lock
                print('z-lock inactive')
            time.sleep(0.01)
        I.append(read_camera(channel))

    # do whatever you would like to, for example, image stitching (a minimal sketch follows below)
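
    A minimal stitching sketch for the tiles collected above, assuming the tiles land exactly on the stage grid (no overlap, no blending) and ignoring any stage/camera axis flips; the output path is only an example:
    # place each tile into one large mosaic based on its stage position
    h,w=I[0].shape # tile size in pixels
    x0=min(x_positions) # mosaic origin in um
    y0=min(y_positions)
    mosaic=np.zeros((tile[0]*h,tile[1]*w),dtype=I[0].dtype) # empty canvas for all tiles
    for k in range(len(I)):
        r=int(round((y_positions[k]-y0)/pixel)) # row offset in pixels
        c=int(round((x_positions[k]-x0)/pixel)) # column offset in pixels
        mosaic[r:r+h,c:c+w]=I[k] # paste the tile
    import skimage.io # use skimage for saving tiffs
    skimage.io.imsave('C:/users/oni/desktop/mosaic.tif',mosaic,check_contrast=False) # save the mosaic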

  10. How to create a light program
    Note that PythONI currently does not support group setting and manual input is needed.
    from NimDotNet import LightProgram # import library for light program
    lp=LightProgram([[[0,0,0,50]],[[0,0,50,0]]]) # create a two-step light program
    # light program: [ [step 1], [step 2], ... ]
    # step: [ [state 1], [state 2], ... ]
    # state: [laser 1, laser 2, laser 3, laser 4]
    lp.Step[0].Repeats=1000 # set the first step to repeat 1000 times
    lp.Step[1].Repeats=1000 # set the second step to repeat 1000 times
    light.Program=lp # set the light program
    light.ProgramActive=True # activate the light program

  11. How to perform a multi-acquisition (load from a JSON file)
    # load a multi-acquisition from a JSON file
    def load_multi_acquisition(input_filepath):
        multi_acquisition=instrument.MultiAcquisitionControl # create an instance of the multi-acquisition object
        if not multi_acquisition.LoadConfigurationFromFile(input_filepath): # load a multi-acquisition recipe from a JSON file
            print('failed to load the multi-acquisition configuration from file')
        else:
            multi_acquisition.Start() # start the multi-acquisition
            # import time # import time if not done
            while multi_acquisition.IsRunning: # wait until the multi-acquisition has completed
                time.sleep(0.1)


Manual

PythONI pre-defines several objects to facilitate access to hardware and data control. Note that some functions and variables exist only in the development build; only those that users have access to are summarized here:
  1. instrument
    # methods
    instrument.Connect() # connect to the microscope
    instrument.ConnectToSimulatedHardware() # connect to simulated hardware
    instrument.Disconnect() # disconnect from the microscope
    instruments=instrument.GetAvailableInstruments() # get the available microscope(s) as a System.String[], use print(instruments[0]) to print the SN of the first available microscope
    instrument.SelectInstrument(instruments[0]) # select an instrument (id) to connect

    # properties
    autofocus=instrument.AutoFocusControl # create an instance of the autofocus object
    camera=instrument.CameraControl # create an instance of the camera object
    focus_cam=instrument.FocusCameraControl # create an instance of the focus camera object
    illum_angle=instrument.IlluminationAngleControl # create an instance of the illumination object
    flag=instrument.IsConnected # check to see if the microscope is connected
    light=instrument.LightControl # create an instance of the laser object
    stage=instrument.StageControl # create an instance of the stage object
    temperature=instrument.TemperatureControl # create an instance of the temperature object
    led=instrument.TransilluminationControl # create an instance of the LED object
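
    A minimal connection sketch using only the calls above (it simply connects to the first instrument found):
    instruments=instrument.GetAvailableInstruments() # list the available microscope(s)
    if len(instruments)>0:
        instrument.SelectInstrument(instruments[0]) # select the first instrument by its id
        instrument.Connect() # connect to the microscope
    print('connected' if instrument.IsConnected else 'not connected') # report the connection state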

  2. light
    # methods
    flag=light.IsLaserPresent(0) # check to see if the UV laser is available, the laser order is [0,1,2,3,4,5] -> [UV, blue, green, red, iSCAT, focus laser]

    # properties
    focus_laser=light.FocusLaser # create an instance of the focus laser object
    flag=light.FocusLaserOn # check to see if the focus laser is on, can also be set to True to turn on the focus laser
    light.GlobalOnState=True # enable the laser global on state, equivalent to Enable Active Lasers/Disable Lasers
    light[0].Enabled=True # enable the first laser
    light[0].PercentPower=30 # set the power of the first laser to 30%
    value=light[0].PowerW # get the current power of the first laser in W
    value=light[0].PowerDensitykWcm2 # get the current power density of the first laser in kW/cm2
    value=light[0].Wavelength # get the wavelength of the first laser in nm
    flag=light.IsConnected # check to see if the lasers are connected
    flag=light.IsFocusLaserPresent # check to see if the focus laser is available
    value=light.NumLasers # check how many imaging lasers are available, including UV, blue, green, and red
    light.Program=lp # set the light program to lp (NimDotNet.LightProgram), where lp should be defined in advance, for example using lp=LightProgram([[[0,0,0,50]],[[0,0,50,0]]])
    light.ProgramActive=True # enable the light program, equivalent to Enable Light Program

  3. stage
    # properties
    axis=stage.Axis(0) # get the id of the first stage, which can be used for controlling
    value=stage.GetMaximumInMicrons(stage.Axis(0)) # get the positive limit of the first stage in um
    value=stage.GetMinimumInMicrons(stage.Axis(0)) # get the negative limit of the first stage in um
    value=stage.GetPositionInMicrons(stage.Axis(0)) # get the current position of the first stage in um
    flag=stage.IsConnected(stage.Axis(0)) # check if the first stage is connected
    flag=stage.IsInitializing(stage.Axis(0)) # check if the first stage is initializing
    flag=stage.IsMoving(stage.Axis(0)) # check if the first stage is moving, which must be used to identify the stop point of each translation
    stage.RequestMoveAbsolute(stage.Axis(0),100) # move the first stage to +100um
    axis=stage.Axis.X # alternative way to access the id of the first stage

  4. illum_angle
    # methods
    flag=illum_angle.Connected # check if the TIRF stage is connected
    value=illum_angle.CurrentPositionInDegrees # get the current illumination angle in degrees
    value=illum_angle.GetMaximumDegrees() # get the maximum illumination angle in degrees
    value=illum_angle.GetMinimumDegrees() # get the minimum illumination angle in degrees
    illum_angle.RequestMoveAbsolute(53) # set the illumination angle to 53 degrees

  5. temperature
    # method
    value=temperature.CurrentTemperatureC # get the current temperature in C

    # properties
    temperature.ControlEnabled=True # enable the temperature control
    temperature.TargetTemperatureC=37 # set the target temperature to 37 C

  6. camera
    # methods
    flag=camera.GetDeviceState() # get the camera state, can be [DeviceState.UNINITIALIZED, DeviceState.INITIALIZING, DeviceState.CONNECTED, DeviceState.VIEW_STARTING, DeviceState.VIEW_ACTIVE, DeviceState.VIEW_STOPPING]
    flag=camera.GetAcquisitionState() # get the acquisition state
    camera.BeginView() # start the camera live view
    camera.StopView() # end the camera live view
    value=camera.GetExposureTimeMilliseconds() # get the exposure in ms
    value=camera.GetFramesPerSecond() # get the frequency in Hz
    camera.SetTargetExposureMilliseconds(10) # set the exposure to 10ms
    camera.SetTargetFramesPerSecond(100) # set the frequency to 100Hz
    image=camera.GetLatestImage() # get the latest frame from the camera
    value=camera.GetROIHeight() # get the height of the current ROI, default to 1024
    value=camera.GetROIWidth() # get the width of the current ROI, default to 1024
    value=camera.GetROIOffsetX() # get the x-offset of the current ROI, default to 0
    value=camera.GetROIOffsetY() # get the y-offset of the current ROI, default to 0
    camera.SetROI(left,top,width,height) # set the ROI in RECT format [x,y,width,height]
    value=camera.GetSensorTemperatureCelsius() # get the camera temperature in C
    value=camera.NumberOfFramesWaitingInBuffer() # get the number of frames in the buffer
    images=camera.CreateImageSourceAndAcquire(100) # immediately acquire 100 frames from the camera continuously
    images=camera.CreateImageSourcePaused(100) # initiate the acquisition of 100 frames from the camera, which will start upon calling camera.ContinueAcquisitionFor()
    camera.ContinueAcquisitionFor(frame_count,flag_reset_view,flag_stop_view) # continue to acquire frames from the camera continuously after initialization

    # properties
    value=camera.MaxROIHeight # get the maximum height of the camera with binning
    value=camera.MaxROIWidth # get the maximum width of the camera with binning
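
    A minimal snapshot sketch combining the calls above (the 0.5 s pause and the output path are arbitrary choices, not requirements of the API):
    camera.BeginView() # start the camera live view
    camera.SetTargetExposureMilliseconds(30) # set the exposure to 30ms
    # import time # import time if not done
    time.sleep(0.5) # give the camera time to deliver a frame at the new exposure
    im=nim_image_to_array(camera.GetLatestImage()) # latest frame as a numpy.array
    camera.StopView() # end the camera live view
    import skimage.io # use skimage for saving tiffs
    skimage.io.imsave('C:/users/oni/desktop/snapshot.tif',im,check_contrast=False) # save as tiff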

  7. focus_cam
    # methods
    value=focus_cam.GetExposureTimeMilliseconds() # get the exposure of the focus camera in ms
    value=focus_cam.GetFramesPerSecond() # get the frequency of the focus camera in Hz
    image=focus_cam.GetLatestImage() # get the latest frame from the focus camera
    value=focus_cam.GetROIHeight() # get the height of the current ROI
    value=focus_cam.GetROIWidth() # get the width of the current ROI
    value=focus_cam.GetROIOffsetX() # get the x-offset of the current ROI
    value=focus_cam.GetROIOffsetY() # get the y-offset of the current ROI
    focus_cam.SetROI(top,left,width,height) # set the ROI of the focus camera in RECT format
    focus_cam.SetTargetExposureMilliseconds(50) # set the exposure of the focus camera in ms
    focus_cam.SetTargetFramesPerSecond(20) # set the frequency of the focus camera in Hz

    # properties
    flag=focus_cam.IsConnected # check if the focus camera is connected

  8. autofocus
    # methods
    autofocus.ClearReferencePoint() # clear the current focus reference
    autofocus.StartReferenceCalibration() # set the focus reference, equivalent to Set Focus Ref.
    autofocus.StartContinuousAutoFocus() # enable the z lock, equivalent to Z Lock
    autofocus.Stop() # stop the z lock, equivalent to Stop Z Lock
    flag=autofocus.HasReferencePoint # check if a focus reference has been set
    autofocus.QuickFocus(0) # perform a one-time focus, equivalent to Focus

    # properties
    flag=autofocus.CurrentStatus # get the focus state, can be [autofocus.Status.NOT_RUNNING, autofocus.Status.CALIBRATING, autofocus.Status.FOCUSING_SINGLE_SHOT, autofocus.Status.FOCUSING_SINGLE_SHOT_QUICK, autofocus.Status.FOCUSING_CONTINUOUS]
    value=autofocus.FocusOffsetMax # get the maximum z offset in um
    value=autofocus.FocusOffsetMin # get the minimum z offset in um
    autofocus.FocusOffset=1 # set the z offset in um
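
    A minimal sketch of setting a focus reference and then enabling the z lock with the calls above:
    autofocus.StartReferenceCalibration() # set the focus reference, equivalent to Set Focus Ref.
    # import time # import time if not done
    while autofocus.CurrentStatus is autofocus.Status.CALIBRATING: # wait for the calibration to finish
        time.sleep(0.1)
    if autofocus.HasReferencePoint: # only lock if a reference was set
        autofocus.StartContinuousAutoFocus() # enable the z lock, equivalent to Z Lock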

  9. acquisition
    # methods
    flag=acquisition.State # get the acquisition state, can be [acquisition.AcquisitionState.NOT_ACQUIRING, acquisition.AcquisitionState.ACQUISITION_ACTIVE, acquisition.AcquisitionState.ACQUISITION_PAUSED, acquisition.AcquisitionState.ACQUISITION_COMPLETING]
    acquisition.Start('folder_name','filename',frame_count) # start the acquisition immediately
    acquisition.InitAcquisition('folder_name','filename',frame_count) # initiate an acquisition, which will start upon calling acquisition.ContinueFor(frame_count)
    acquisition.ContinueFor(frame_count) # continue to acquire frames from the camera after initialization
    acquisition.Stop() # stop the current acquisition, equivalent to Stop

    # properties
    acquisition.SaveTiffFiles=True # enable saving tiffs
    acquisition.RealTimeLocalization=True # enable real-time localization
    flag=acquisition.IsAcquiring # check if the current acquisition is running
    flag=acquisition.IsActiveOrCompleting # check if the current acquisition is active or completing

  10. calibration
    # methods
    channel_mapping=calibration.ChannelMapping # create an instance of the channel mapping object
    calibration.ChannelMapping.BeginCalibration(20,2000,5,10,True) # start the channel mapping with default settings, arguments are [max_fov, target_number_points, max_pixel_distance_between_channels, exclusion_radius_between_channels, use_zlock]
    value=calibration.ChannelMapping.GetLatestMapping().pixelSize_um # get the pixel size in um
    value=calibration.ChannelMapping.GetLatestMapping().stDevSingleAxisAbsoluteErrors # get the standard deviation of errors of the latest channel mapping in pixels
    value=calibration.ChannelMapping.GetLatestMapping().proportionCoverage # get the point coverage of the latest channel mapping in [0.0, 1.0]
    calibration.ChannelMapping.SaveLatestCalibration() # save the latest channel mapping, equivalent to Save Mapping
    calibration.ChannelMapping.SaveDefaultMapping() # save the default channel mapping, equivalent to Reset Mapping To Default

  11. data_manager
    # methods
    data_manager.LoadData("absolute_file_directory") # load data from a path, can be tif or locb
    data_manager.Clear() # clear the data

    # properties
    value=data_manager.BinaryFiles # get the binary files as System.String in the order [0, 1] -> [locb, nimb], use value[0] to access the locb file
    value=data_manager.Directory # get the absolute directory of the current data
    value=data_manager.Files # get the data files as System.String in the order [0, 1, 2] -> [tif, locb, nimb]
    value=data_manager.ImageFiles # get the tiff file from the current data
    flag=data_manager.IsAcquiring # check if the data manager is acquiring images
    flag=data_manager.IsBusy # check if the data manager is busy
    flag=data_manager.IsEmpty # check if the data manager is empty
    flag=data_manager.IsLoading # check if the data manager is loading data
    flag=data_manager.IsProcessing # check if the data manager is processing data
    locs=data_manager.Localizations # create an instance of the localization object
    value=data_manager.RawImages.TotalFrames # get the number of frames from the current data
    value=data_manager.RawImages.NChannels # get the number of channels from the current data
    value=data_manager.RawImages.AcqData.JSON # read the acq.nim file of the current data
    image=data_manager.RawImages.GetChannelImage(index,channel) # get a channel from the current data
    image=data_manager.RawImages.GetImage(index) # get a frame from the current data
    image=data_manager.RawImages.GetMappedChannelImage(index,channel) # get a mapped channel from the current data
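
    A minimal sketch of loading an acquisition and exporting its first frame (the file paths are placeholders only):
    data_manager.LoadData('C:/users/oni/desktop/test/test.tif') # load the data, can be tif or locb
    # import time # import time if not done
    while data_manager.IsLoading or data_manager.IsBusy: # wait until loading has finished
        time.sleep(0.1)
    n=data_manager.RawImages.TotalFrames # number of frames in the data
    im=nim_image_to_array(data_manager.RawImages.GetImage(0)) # first frame as a numpy.array
    import skimage.io # use skimage for saving tiffs
    skimage.io.imsave('C:/users/oni/desktop/frame0.tif',im,check_contrast=False) # save as tiff
    print(str(n)+' frames in the loaded data') # print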

  12. user_settings
    # methods
    user_settings.CropTiffImages=True # enable cropping tiffs
    user_settings.DisableLasersAfterAcquisition=True # enable turning off lasers after acquisition
    user_settings.SplitFiles=True # enable saving tiffs as chunks of 2GB files

  13. Other useful functions:
    # print_methods
    print_methods(instrument) # print available methods and properties for a chosen object

    # nim_image_to_array
    image=nim_image_to_array(data_manager.RawImages.GetImage(index)) # convert a NimOS image to a numpy.array

Practical Examples ("Plug and Play")

  1. How to start a single FOV single-color acquisition

    # example main functions for single FOV single color imaging
    if __name__ == "__main__":
        # connect to the microscope
        # set the focus reference
        camera.BeginView() # turn on the camera
        light.GlobalOnState=True # enable laser control
        light[3].Enabled=True # enable the red laser
        light[3].PercentPower=30 # set the red laser power
        frame_count=1000 # define frame count
        camera.SetTargetExposureMilliseconds(30) # set exposure
        acquisition.Start('test','test',frame_count) # start the acquisition
        while acquisition.IsActiveOrCompleting: # wait until the acquisition has completed
            time.sleep(0.1)
        while data_manager.IsBusy: # wait until the data has been processed
            time.sleep(0.1)

  2. How to start a single FOV multi-color acquisition
    # example main functions for single FOV two color imaging
    # with light program
    if __name__ == "__main__":
        # connect to the microscope
        # set the focus reference
        camera.SetTargetExposureMilliseconds(30) # set exposure
        laser_power=[30,50] # define laser powers, for example, use the blue and red laser
        frame_count=[500,1000] # define frame counts

        from NimDotNet import LightProgram # note that it is currently not available to set the groups
        lp=LightProgram([[[0,0,0,laser_power[0]]],[[0,laser_power[1],0,0]]]) # create a 2-step light program
        lp.Step[0].Repeats=frame_count[0] # set the number of repetitions
        lp.Step[1].Repeats=frame_count[1]
        light.Program=lp # save the light program to NimOS
        light.ProgramActive=True # enable the light program

        acquisition.Start('test','test',sum(frame_count)) # start the acquisition
        while acquisition.IsActiveOrCompleting: # wait until the acquisition has completed
            time.sleep(0.1)
        while data_manager.IsBusy: # wait until the data has been processed
            time.sleep(0.1)

    # without light program
    if __name__ == "__main__":
        # connect to the microscope
        # set the focus reference
        lasers=[3,1] # define the laser order, for example, use the blue and red laser
        laser_power=[30,50] # define the laser power
        frame_count=[500,1000] # define frame count
        exposures=[30,30] # define exposure
        camera.BeginView() # turn on the camera
        light.GlobalOnState=True # enable laser control
        for i in range(len(lasers)): # for each laser
            camera.SetTargetExposureMilliseconds(exposures[i]) # set the exposure
            light[lasers[i]].Enabled=True # enable the laser
            light[lasers[i]].PercentPower=laser_power[i] # set the laser power
            acquisition.Start('test','C{:01d}'.format(i),frame_count[i]) # start the acquisition
            while acquisition.IsActiveOrCompleting: # wait until the acquisition has completed
                time.sleep(0.1)
            while data_manager.IsBusy: # wait until the data has been processed
                time.sleep(0.1)

  3. How to start a multi-FOV multi-color acquisition
    # example main function for two FOV two color imaging
    if __name__ == "__main__":
        # connect to the microscope
        # set the focus reference
        positions=[[0,0,0],[100,100,0]] # define the positions in [x,y,dz] in um
        lasers=[3,1] # define the laser order, for example, use the blue and red laser
        laser_power=[30,50] # define the laser power
        frame_count=[500,1000] # define frame count
        exposures=[30,30] # define exposure
        camera.BeginView() # turn on the camera
        light.GlobalOnState=True # enable laser control
        for j in range(len(positions)): # for each position
            for i in range(len(lasers)): # for each laser
                autofocus.FocusOffset=positions[j][2] # move z stage offset
                stage.RequestMoveAbsolute(stage.Axis.X,positions[j][0]) # move x stage
                stage.RequestMoveAbsolute(stage.Axis.Y,positions[j][1]) # move y stage
                while stage.IsMoving(stage.Axis.X) or stage.IsMoving(stage.Axis.Y) or stage.IsMoving(stage.Axis.Z):
                    time.sleep(0.01)
                camera.SetTargetExposureMilliseconds(exposures[i]) # set the exposure
                light[lasers[i]].Enabled=True # enable the laser
                light[lasers[i]].PercentPower=laser_power[i] # set the laser power
                acquisition.Start('test','P{:01d}'.format(j)+'C{:01d}'.format(i),frame_count[i]) # start the acquisition
                while acquisition.IsActiveOrCompleting: # wait until the acquisition has completed
                    time.sleep(0.1)
                while data_manager.IsBusy: # wait until the data has been processed
                    time.sleep(0.1)

  4. How to start a confocal acquisition
    # example main function for single FOV confocal imaging
    if __name__ == "__main__":
        # connect to the microscope
        # set the focus reference
        instrument.ImagingModeControl.CurrentMode=instrument.ImagingModeControl.Mode.Confocal # switch on confocal mode
        light[1].Enabled=True # enable the blue laser
        light[1].PercentPower=30 # set the blue laser power to 30%
        instrument.ImagingModeControl.SetTargetExposureMilliseconds(33) # set the confocal exposure
        instrument.ConfocalController.SetLineSpacing(7) # set the line spacing
        acquisition.Start('test','confocal',1) # start the confocal acquisition
        while acquisition.IsActiveOrCompleting: # wait until the acquisition has completed
            time.sleep(0.1)
        instrument.ImagingModeControl.CurrentMode = instrument.ImagingModeControl.Mode.Normal # switch back to normal mode

  5. How to start a 3D acquisition
    # example main function for single FOV 3D imaging
    if __name__ == "__main__":
        # connect to the microscope
        # set the focus reference
        instrument.ImagingModeControl.CurrentMode=instrument.ImagingModeControl.Mode.ThreeD # switch on 3D mode
        light[1].Enabled=True # enable the blue laser
        light[1].PercentPower=30 # set the blue laser power to 30%
        camera.SetTargetExposureMilliseconds(30) # set the exposure time
        acquisition.Start('test','3D',1) # start the 3D acquisition
        while acquisition.IsActiveOrCompleting: # wait until the acquisition has completed
            time.sleep(0.1)
        while data_manager.IsBusy: # wait until the data has been processed
            time.sleep(0.1)
        instrument.ImagingModeControl.CurrentMode = instrument.ImagingModeControl.Mode.Normal # switch back to normal mode


Discussion

  1. Programming with PythONI is not limited to what is shown in this instruction, and we encourage users to try all kinds of combinations. For example, you may integrate external hardware control (e.g. a pump system) into the NimOS automation. Note that the current NimOS python version is 3.6.6 and an upgrade is not recommended at this moment.
  2. The current PythONI interpreter does not come with a debugger. For debugging PythONI code, it is recommended to install a python IDE that includes a debugger, such as Spyder, in the NimOS environment. For example, once the virtual environment is activated, you can run "pip install spyder". Note that the python interpreter then needs to be changed to the NimOS one from within the IDE.
  3. The NimOS GUI interacts with PythONI, so clicks or modifications in the GUI could crash NimOS while a PythONI script is running. It is recommended not to change anything in the NimOS GUI during execution.

If at any point you are having issues with the coding process, please do not hesitate to contact the CX team.








