Tools and techniques enable automated cell nanosurgery

Instruments for automated single-cell nanosurgery help elucidate fundamental processes in biology and exploit them for the early detection and treatment of disease.
08 November 2012
Nicholas Hensel, Jack Stokes, Carson Labash and Michael Schrlau

Biological cells, like people, are similar in composition and governed by the same fundamental processes, yet each behaves uniquely, responding to its environment in different ways and to different degrees. The rich information contained within this cellular diversity is lost, however, when populations of cells are studied together. The need to study cells individually is clear if we are to develop a comprehensive understanding of fundamental cell physiology. To meet this need, we are developing nanoscale tools and automated instrumentation to enable and facilitate the study of single cells.

Figure 1 shows the multifunctional carbon nanotube-based tools for cell nanosurgery that we have developed.1, 2 These nanoscale devices can transport molecules and particles into cells,2, 3 detect biomolecules inside cells,4 and measure electrophysiological changes within cells,5 all without causing damage.3, 6 These tools open new ways of studying cells, with the aim of detecting the early onset of disease and improving drug efficacy.


Figure 1. Carbon-nanotube-based cellular probes. Scanning electron microscopy images of (A) a carbon nanopipette3 and (B) a carbon nanotube-tipped endoscope.2

Cell nanosurgery procedures, such as intracellular injection, are typically performed under an optical microscope at high magnification. A user identifies target cells based on subjective visual cues and manually maneuvers a probe to those cells to perform surgical functions. These procedures are time-consuming and laborious, and they are often difficult and inefficient even for skilled personnel. To make cell nanosurgery more efficient, researchers are attempting to automate cell identification using image processing in bright-field,7 fluorescence,8 and pseudo-3D9 microscopy. Our work combines image processing and robotic control to objectively determine ideal cells for nanosurgery, compute optimal interrogation points, and automatically maneuver a probe to interrogate cells at those points.10


Figure 2. Automated cell nanosurgery system. GUI: Graphical user interface.

Figure 2 depicts the general functionality of the automated system, which comprises both specialized hardware and custom software.10 Cells cultured in a Petri dish are placed under the microscope (Zeiss Axio Observer). The tip of a nanoscale probe, attached to a nanomanipulator (Eppendorf TransferMan NK2), is positioned within view. The nanomanipulator maneuvers the tips of nanoscale probes to targeted locations within cells. A camera (Point Grey Chameleon) transmits high-magnification images of the cells and cell probing procedures to a desktop computer. The computer runs software (Mathworks MATLAB) consisting of custom image processing and manipulator control algorithms. A custom graphical user interface (GUI) provides real-time visual feedback and manual or automated control of the nanomanipulator.
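To make the pipeline concrete, the following is a minimal MATLAB sketch of the kind of serial link the system relies on to command the nanomanipulator. The port name, baud rate, and the 'RELAT' command string are hypothetical placeholders; the TransferMan NK2's actual protocol is not described here.

% Minimal sketch: open a serial link to the nanomanipulator and issue
% one relative move. The port name, baud rate, and 'RELAT' command
% string are hypothetical placeholders for the device's actual protocol.
nk2 = serial('COM3', 'BaudRate', 19200, 'Terminator', 'CR');
fopen(nk2);                           % establish the connection

dx = 10; dy = 0; dz = -5;             % desired step in manipulator units
fprintf(nk2, sprintf('RELAT %d,%d,%d', dx, dy, dz));
reply = fgetl(nk2);                   % read the manipulator's acknowledgment

fclose(nk2);                          % release the port when finished
delete(nk2);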

The goal of our image-processing algorithm is to objectively determine ideal cells for nanosurgery and compute optimal interrogation points from live, bright-field images to avoid fluorescent cell staining. The algorithm is tasked with finding the edges and nuclei of each cell in the image. The relative location of these two features enables us to target the best candidates for nanosurgery and compute ideal interrogation points.


Figure 3. Results from the vision recognition algorithm. (A) Nuclei and (B) cell edges are found using custom image-processing techniques to target specific regions within the cells.

First, the algorithm performs a segmentation routine to locate all nuclei present in the image by passing a ring-shaped filter over each pixel. Pixels with intensities above a defined threshold indicate the centers of nuclei: see Figure 3(A). Using our prototype algorithm, we correctly identify approximately 40% of the nuclei in an image and are currently working to reduce the number of false positives (non-nuclei features) and false negatives (missed nuclei). Once the nuclei have been identified, the associated edges, or membranes, of each cell must also be located. Figure 3(B) shows how cell edges are found using a region-growth algorithm: each nucleus expands outward, slows its expansion in proportion to its proximity to other nuclei, and stops where it reaches the background of the image.
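A simplified MATLAB sketch of this nucleus-detection step appears below. The kernel radii, threshold value, and file name are illustrative assumptions, not the prototype's tuned parameters.

% Sketch of the segmentation routine: convolve the image with a
% ring-shaped (annular) kernel and threshold the response. Kernel
% radii and the threshold value are illustrative assumptions.
img = im2double(imread('cells.tif'));         % bright-field frame

[x, y] = meshgrid(-15:15, -15:15);            % 31x31 kernel support
r = sqrt(x.^2 + y.^2);
ring = double(r >= 10 & r <= 13);             % annulus of inner/outer radius
ring = ring / sum(ring(:));                   % normalize to unit weight

response = imfilter(img, ring, 'replicate');  % filter response at each pixel
mask = response > 0.6;                        % defined intensity threshold

stats = regionprops(bwlabel(mask), 'Centroid');
centers = cat(1, stats.Centroid);             % candidate nucleus centers (x, y)

Each connected above-threshold region is reduced to a single candidate center, and these centers would then seed the region-growth step that locates the cell edges.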

The goal of our manipulator control algorithm is to allow users to easily maneuver the tips of nanoscale tools to precise locations. This is accomplished through a custom GUI, which allows the user to perform manual maneuvers or select from several programmed functions to automate the process, including Multi MoveTo (move to multiple locations in the order of selection) and Get Cell Points (move to cells identified by the image-processing algorithm).10

First, the control algorithm creates the GUI and establishes a serial port connection with the manipulator (Initialization). Then, from the GUI, the user selects manual control or one of several automated functions as described above (User Input). Finally, the algorithm converts input from the user or the image-processing routine into movement commands and sends them to the manipulator (Execution). Figure 4 shows an example of the Multi MoveTo command.10
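The Execution step might be sketched as a small MATLAB helper that converts image-plane points into movement commands. The camera calibration factor and the 'MOVETO' command string are assumptions, and nk2 is the serial object from the earlier sketch.

% Sketch of the Execution step: convert an N-by-2 array of (x, y)
% pixel coordinates into manipulator moves. The calibration factor
% and 'MOVETO' command string are illustrative assumptions.
function moveToPoints(nk2, pts, umPerPixel)
    for k = 1:size(pts, 1)
        tx = pts(k, 1) * umPerPixel;    % pixel -> stage coordinates
        ty = pts(k, 2) * umPerPixel;
        fprintf(nk2, sprintf('MOVETO %.2f,%.2f', tx, ty));
        pause(0.5);                     % allow each move to complete
    end
end

For Multi MoveTo, pts would hold the points clicked in the GUI (gathered, for example, with ginput); for Get Cell Points, it would hold the centroids returned by the segmentation sketch above.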


Figure 4. Moving to three select points using Multi MoveTo. The user selects the order and number of interrogation points from the GUI (left, 1–3), which then sends the instructions to the manipulator (right, 4–6).

Figure 5. Automated probe targeting using the Get Cell Points command. The vision recognition algorithm identifies nuclei in the image and directs the manipulator to maneuver the probe to those points.

Figure 5 shows our current progress toward combining vision recognition and robotic control to automate nanosurgery. With a Petri dish of cultured cells under the microscope and the tip of a nanoscale probe positioned at the center of the field of view, the user selects the Get Cell Points command. The vision recognition algorithm finds several nuclei and their centroids (along with some false positives) and then commands the manipulator to maneuver the probe to those points.
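Under the assumptions of the sketches above, Get Cell Points reduces to handing the detected centroids to the movement helper:

% Drive the probe to each detected nucleus (all names assumed above).
moveToPoints(nk2, centers, 0.16);     % 0.16 um/pixel: assumed calibration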

In future work, we will focus our efforts in two areas: developing nanoscale probes of different geometries and materials to enable previously impossible single-cell metrology and nanosurgery, and refining and expanding the image-processing and manipulator-control algorithms for efficient cell nanosurgery. These efforts aim to enable and expedite research at the single-cell level and so advance the early detection and treatment of disease.


Nicholas Hensel, Jack Stokes, Carson Labash, Michael Schrlau
Nano-Bio Interface Laboratory (NBIL)
Rochester Institute of Technology (RIT)
Rochester, NY

Nicholas Hensel is a fourth-year dual-degree undergraduate student in the mechanical and electrical engineering programs at RIT. His current research interests focus on computerized system control and user interface structures.

Jack Stokes is a fourth-year undergraduate student in the computer engineering program at RIT. His current research interests focus on computer vision, automation, and optimization techniques.

Carson Labash is a fourth-year undergraduate student in the molecular bioscience and biotechnology program at RIT. Her current research interests focus on the effects of cell surgery and interrogation on the health of single living cells.

Michael Schrlau is an assistant professor in the Department of Mechanical Engineering at RIT and directs the NBIL. Michael earned his PhD at the University of Pennsylvania (2009) and was a research professor in materials science at Drexel University until 2011. His research focuses on integrating biology and technology at the nanoscale.


References:
1. M. G. Schrlau, E. M. Falls, B. L. Ziober, H. H. Bau, Carbon nanopipettes for cell probes and intracellular injection, Nanotechnology 19, p. 015101, 2008.
2. R. Singhal, Z. Orynbayeva, R. V. Kalyana Sundaram, J. J. Niu, S. Bhattacharyya, E. A. Vitol, M. G. Schrlau, E. S. Papazoglou, G. Friedman, Y. Gogotsi, Multifunctional carbon-nanotube cellular endoscopes, Nat. Nanotechnol. 6, p. 57-64, 2011.
3. M. G. Schrlau, E. Brailoiu, S. Patel, Y. Gogotsi, N. J. Dun, H. H. Bau, Carbon nanopipettes characterize calcium release pathways in breast cancer cells, Nanotechnology 19, p. 325102, 2008.
4. J. J. Niu, M. G. Schrlau, G. Friedman, Y. Gogotsi, Carbon nanotube-tipped endoscope for in situ intracellular surface-enhanced Raman spectroscopy, Small 7, p. 540-545, 2011.
5. M. G. Schrlau, N. J. Dun, H. H. Bau, Cell electrophysiology with carbon nanopipettes, ACS Nano 3, p. 563-568, 2009.
6. Z. Orynbayeva, R. Singhal, E. A. Vitol, M. G. Schrlau, E. Papazoglou, G. Friedman, Y. Gogotsi, Physiological validation of cell health upon probing with carbon nanotube endoscope and its benefit for single-cell interrogation, Nanomedicine 8, p. 590-598, 2012.
7. G. Becattini, L. S. Mattos, D. G. Caldwell, A novel framework for automated targeting of unstained living cells in bright field microscopy, IEEE Int'l Symp. Biomed. Imaging: Nano to Macro, p. 195-198, 2011. doi:10.1109/ISBI.2011.5872386
8. N. H. Abd Halim, A. S. Abdul Nasir, N. R. Mokhtar, H. Rosline, Nucleus segmentation technique for acute leukemia, IEEE 7th Int'l Colloq. Signal Process. Appl., p. 192-197, 2011. doi:10.1109/CSPA.2011.5759871
9. W. H. Wang, D. Hewett, C. E. Hann, J. G. Chase, X. Q. Chen, Machine vision and image processing for automated cell injection, MESA, p. 309-314, 2008. doi:10.1109/MESA.2008.4735751
10. N. Hensel, M. G. Schrlau, Computer controlled actuation of micro-manipulator via serial port with MATLAB GUI, 34th Annu. Int'l Conf. IEEE Eng. Med. Biol. Soc., 2012. Paper FrD25.19