Light Source Modulation

Solid-state light sources in fluorescence imaging – why synchronisation with the camera matters

There are numerous technical benefits to using fast-switching solid-state (LED and laser diode) illuminators in live-cell imaging. These include higher stability, increased lifespan, lack of vibration, lower power consumption and reduced physical size. More on these to follow, but in this note I will focus only on the ability of the light source to be precisely slaved to the output of an imaging camera, with the camera clock, an auxiliary control card or software acting as the timing master. For this to work effectively the digital modulation rate of the light source should be at least 10 kHz (100 µs response), an order of magnitude beyond what can be achieved with a traditional source.

Mercury, xenon and metal halide lamps have historically been the light sources of choice for biological micro-fluorescence measurements. These sources are broad spectrum with very high point intensity, and can be switched on and off, and varied in intensity and wavelength, using mechanical shutters, filter wheels or galvanometers on timescales of several milliseconds to tens of milliseconds.

Scientific CCD, Electron Multiplied CCD or CMOS cameras typically have a suitable ‘fire’ or ‘expose out’ connector, and any well-designed LED, diode laser or solid-state phosphor-based microscopy light source should have a digital TTL “shutter” external input. So what are the benefits of directly interfacing the camera with the light source?

To minimise phototoxicity and photobleaching

Unfortunately for biologists, (most) living cells have not evolved to thrive on being exposed to large dosages of ultraviolet or visible radiation, and neither have genetic indicators or organic dyes. Perversely, the laws of physics dictate that the fundamental shot noise, which limits the dynamic range of measurable signal changes, scales with the square root of the number of detected photons. This conflicting requirement, to both minimise and maximise light levels, makes it highly desirable to restrict the light exposure of the specimen to the precise intervals when the camera sensor is detecting useful signal. How much this helps to reduce photodamage will depend on the sensor type and the timing of the experiment, but the benefit will be significant in most cases. By simply connecting the camera expose output to the light source external input, this is achieved without making any demands on the software. For solid-state illuminators with different emitters supplying different wavelengths, it is useful to have a second digital input to each channel so that the illumination can be controlled by software in parallel with the camera exposure. We provide such additional inputs as standard on our OptoLED and LaserBank illuminators, and can also supply auxiliary boxes to simplify operation with any modulatable light source.
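The square-root argument above can be made concrete with a line of arithmetic. A minimal sketch (the 1% figure is just an illustrative choice, not from the text):

```python
def min_photons_for_change(fractional_change: float) -> float:
    """Photons per pixel needed for shot noise (sqrt(N)) to be no larger
    than the signal change (fractional_change * N): N >= 1/change**2."""
    return 1.0 / fractional_change ** 2

# Resolving a 1% fluorescence change needs on the order of 10,000
# detected photons per pixel, which is why wasted illumination time
# (bleaching the dye without collecting signal) is so costly.
print(min_photons_for_change(0.01))  # about 10000
```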

To avoid timing artefacts

Depending on the camera architecture it may be necessary to switch off the light during part of its cycle in order to avoid readout artefacts. With conventional interline CCDs the charge is transferred almost instantaneously from each photosensitive pixel row to an adjacent masked storage row. This enables the sensor to be rapidly “cleaned”, and also minimises the overhead associated with overlap readout modes, so artefacts from continuous light exposure do not typically occur. In a frame transfer configuration however, which includes most Electron Multiplied cameras, the charge is shunted as a block across the active area of the sensor to an equivalent storage area once per cycle. If photons are allowed to reach the detecting area during this transfer period then they will be read out to the wrong rows. This manifests itself as streaking across the image in the direction of the readout. The transfer period can last for a millisecond or longer, which can be a significant portion of the frame integration period. This matters most when there is a large intrascene dynamic range, which is characteristic of fluorescence measurements, especially when trying to simultaneously image areas of strong and weak signal (e.g. dendrites and soma, or cell body and membrane).

The situation with sCMOS cameras is somewhat more complicated, as these typically run in rolling shutter mode, where the readout of individual rows is not simultaneous. If a rolling shutter camera is exposed to a continuous light source then timing artefacts are always a risk, as there is a progressive delay (of up to 10msec) between the acquisition of central and edge rows.

For single-channel biological imaging, artefacts will probably only be noticeable when recording fast dynamics over a large field of view (as timing discrepancies are a function of how far the pixel rows are displaced from each other). However, if in doubt the safest approach is to modulate the light source so that it is only on when all the rows are being exposed simultaneously, i.e. during the virtual global shutter period. This reduces acquisition to below the camera’s maximum rate and necessitates a brighter light source, as the illumination duty cycle is significantly reduced.

Most sCMOS cameras operate at 100Hz at full resolution (10msec exposure). This assumes that the pixels are being continuously read out, so there is a negligible “global” shutter period. If the light source is at least 10 times brighter than needed for continuous illumination, then increasing the exposure time to 11msec allows equivalent images using 1msec of 10X intensity exposure per frame. This removes readout artefacts without significantly slowing acquisition.
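The arithmetic here is easy to sanity-check. A sketch using the example numbers from the text:

```python
# Example numbers from the note: 10 ms rolling readout per frame, plus a
# 1 ms global window lit by a source 10x brighter than the continuous case.
readout_ms = 10.0        # full-resolution rolling readout time
pulse_ms = 1.0           # light on only during the global shutter period
intensity_factor = 10.0  # relative brightness of the pulsed source

frame_time_ms = readout_ms + pulse_ms      # 11 ms per frame, i.e. ~91 Hz
dose_pulsed = pulse_ms * intensity_factor  # relative photon dose per frame
dose_continuous = readout_ms * 1.0         # dose with continuous light

print(frame_time_ms)                   # 11.0
print(dose_pulsed == dose_continuous)  # True: equivalent images
```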

sCMOS cameras provide digital outputs for both the entire exposure time and for the global period so timing artefacts can be investigated and eliminated (at the expense of the acquisition rates or the need for a brighter light source) if necessary.

To prevent crosstalk between multi-wavelength images

Multi-wavelength or other multichannel image streaming presents an additional challenge for rolling shutter sCMOS sensors. In this case images n and n+1 can be completely different to each other so the non-simultaneous exposure of different rows becomes a serious problem. Using the “Exposure All Rows” or equivalent output of the camera to modulate the light source prevents crosstalk between channels, but for high speed streaming it is also necessary to have a hardware timing signal to progress to the next wavelength on sequential exposures (a software timed signal is unlikely to be fast or accurate enough). Some light sources include this option, or it can also be achieved with an additional control box; please contact us for advice.

To simplify software and minimise delays caused by other devices

Simply using a camera to directly modulate a solid-state light source reduces complexity and potential delays in the software, and serves to maximise the benefits of reduced photon exposure to the sample. If the software or hardware is capable of generating additional signals during the camera exposure time, then further speed improvements can be attained by removing the overheads associated with other devices. For example, assuming that a mechanical filter wheel has a transition time of 30msec, then by streaming the camera and “shuttering” the light source during the transition time, acquisition control can be conducted as a parallel flow rather than with sequential instructions. By using a filter wheel in a continuous spinning mode and/or a piezo in a continuous scanning mode with a modulated light source, yet further speed benefits can be achieved by pulsing the light source in synchrony with the movement.

To improve time resolution

Although the camera readout speed will determine the repeat rate between sequential frames, a modulated light source can independently allow much shorter exposures and hence reduce blurring in dynamic recordings. sCMOS cameras allow this functionality directly (using the expose all rows output), but the same benefits can be achieved with an interline, EMCCD or other frame-transfer camera by using a simple timing pulse to reduce the illumination duty cycle to less than the entire camera exposure time.

Optical Feedback

Why can LEDs benefit from optical feedback?

The steady improvements in light-emitting diode (LED) technology have made these devices increasingly suitable for both illumination and fluorescence excitation in microscopy – and indeed for macroscopic applications too. Compared with incandescent and arc lamp sources, they run much cooler and are inherently more stable. Although what we refer to at Cairn as their “point intensity” (by which we mean their radiant intensity per unit area of the source) may still not be as good as for some arc lamps, it’s still way higher than for incandescent sources. That’s important for efficient illumination in microscopy. And although a given LED has only a limited spectral range, there is a wide enough choice to cover the entire optical spectrum, plus the near infrared and ultraviolet too, and their outputs can easily be combined by use of appropriate chromatic reflectors – or dichroic mirrors as they are usually misnamed! Furthermore, unlike those other sources, they can be switched on and off on timescales in the nanosecond range. So what is there not to like?

The potential performance shortcoming becomes apparent when you begin to exploit their fast switching ability. Different LED wavelengths are produced by slightly different technologies, which means that their propensity to this particular shortcoming is greater for some wavelengths than others. The accompanying oscilloscope screen shots should make this clear.

Optical feedback 1
Optical feedback 2

The yellow trace shows a pulsed current going through the LED, and the blue trace is the optical output, for a 505nm and 590nm LED respectively. The timescale is 5msec per division. One can see that in both cases the optical output declines during the “on” period. The effect isn’t too bad for the 505nm LED, but for the 590nm one it’s quite serious. So what is going on here?

This is actually a temperature effect, caused by the tendency of LEDs to become less efficient as they get hotter, but it’s worth explaining in rather more detail than that, because the physics here is frequently misunderstood. Although the light from an LED is “cold”, in that it is not thermally emitted, the process is not completely efficient, so inevitably some heat is generated as well. It’s the generation of this heat, and how efficiently it can be removed, that limits the amount of light that an LED can safely be made to produce. Like most other semiconductor devices, an LED has a maximum safe junction temperature (the junction in this case being the light-emitting one of course) of about 150 degrees C, beyond which it wings its way to semiconductor heaven. Clearly the more heat we can conduct away, the more current we can pass, and hence the more light we can generate. Therefore these devices – and indeed any other “power” semiconductors – have a metal surface to which a heat sink of some form or other can be attached.

The source of possible misunderstanding is that the temperature of this metal surface is not the LED junction temperature! It would be great if that were so, but there is inherently a thermal resistance between the junction and the case, so the junction temperature will always be higher, and perhaps substantially so. Clearly the LED designers strive to reduce this resistance, but they have their own laws of physics to contend with here. As a general rule, the maximum rating of a power semiconductor is quoted for a case temperature of 25 degrees, so if the maximum safe junction temperature is 150 degrees, then the temperature difference between junction and case is a massive 125 degrees. That difference and the thermal resistance between junction and case set the rate at which heat can be removed, so if we wanted to double the power handling by having a 250 degree temperature gradient instead, we’d need to get the case temperature down to -100 degrees! Therefore, although it makes sense to keep the case temperature as low as we sensibly can, this doesn’t help nearly as much as you might think. Put the other way, a case temperature of 50 degrees (getting quite warm) would reduce the power rating to only 100/125 of maximum, a mere 20% reduction.
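The derating argument can be written out explicitly. A sketch using the figures quoted above (real devices publish their own derating curves, so treat these as illustrative):

```python
T_JUNCTION_MAX = 150.0  # deg C, typical maximum safe junction temperature
T_CASE_RATED = 25.0     # deg C, case temperature at which ratings are quoted

def power_fraction(case_temp_c: float) -> float:
    """Fraction of rated power handling available at a given case
    temperature, assuming heat flow scales linearly with the
    junction-to-case temperature difference."""
    return (T_JUNCTION_MAX - case_temp_c) / (T_JUNCTION_MAX - T_CASE_RATED)

print(power_fraction(50.0))    # 0.8 -> a warm 50 C case costs only 20%
print(power_fraction(-100.0))  # 2.0 -> doubling the rating needs -100 C
```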

Oscilloscope traces

Now let’s get back to those oscilloscope traces. Since the sag is a temperature effect, we can actually use it as a measure of the LED junction temperature, so this is telling us that the junction temperature changes on a timescale of some milliseconds when the current is changed. Two important things follow. First, it should be clear that there is no way we can get rid of this effect by regulating the case temperature – the required temperature fluctuations would be enormous, and the timescales are all wrong anyway. Second, and perhaps rather more usefully, it’s giving us some useful information on the thermal capacity of the LED junction and its immediate environs. Since the junction temperature doesn’t change immediately with the current, we can pass higher currents for shorter periods without sending the device heavenwards, on the timescale revealed by the temperature effect (in this case milliseconds). That can be very handy!

Even though this is all very useful information, the effect is nevertheless a potential pain, so that’s where optical feedback comes in. Take a look at these next two traces.

Optical feedback 3
Optical feedback 4

These are for the same two 505 and 594nm LEDs, both with optical feedback this time, and the traces are now perfectly square. The thermal effect has a timecourse on the order of milliseconds, whereas the optical feedback can be applied on a microsecond timescale, which is sufficiently fast to deal with it completely. All we need to do this is a photodiode and amplifier that looks at just a fraction of the light from the LED, which it can do from “out of the way” of the main optical pathway, so there is no loss of useable light. This is actually so easy to do that we incorporated it in our original OptoLED design way back in 2003, and it’s been a key feature of that product ever since.
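The principle can be illustrated with a deliberately simplified simulation – a toy first-order thermal model with made-up constants, not a description of the OptoLED circuit – in which the feedback loop runs much faster than the thermal sag:

```python
# Toy model: the junction warms with a millisecond time constant while
# the LED is on, efficiency falls as it warms, and an (assumed) integral
# feedback loop samples a photodiode every 10 microseconds.
dt = 10e-6     # control-loop period, s (microsecond regime)
tau = 2e-3     # thermal time constant, s (millisecond regime)
sag = 0.3      # fractional efficiency loss when fully warmed up
target = 1.0   # desired optical output (arbitrary units)
gain = 0.2     # integral gain of the feedback amplifier

def run(feedback: bool, steps: int = 1000) -> float:
    """Return the optical output after 10 ms of 'on' time."""
    current, heat, light = 1.0, 0.0, 0.0
    for _ in range(steps):
        heat += (1.0 - heat) * dt / tau         # junction warms towards a limit
        light = current * (1.0 - sag * heat)    # a hot LED is less efficient
        if feedback:
            current += gain * (target - light)  # nudge current to cancel sag
    return light

print(run(feedback=False))  # sags towards ~0.7: the droop in the traces
print(run(feedback=True))   # held at ~1.0: the "squared up" output
```

Because the loop period is hundreds of times shorter than the thermal time constant, the controller corrects the droop long before it becomes visible, which is exactly why the corrected traces look square.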

However, there are two potential traps for the unwary! First, in order to “square up” the optical output, the current during a light pulse is going to increase somewhat, so one must be careful not to cause this to overdrive the LED. Some sort of protection circuit is therefore required, although in practice we need something like that anyway.

Spectral output in relation to temperature

The second trap is a rather more insidious one. This is that the spectral output of an LED can change with temperature, or possibly with the current itself. It’s always advisable to use an LED with cleanup filters, to block any emissions that are outside the required waveband, so now one must ensure that the photodiode is seeing the same spectral range as the filtered output, and that means siting the photodiode downstream of the filtering. We found this to be particularly important when using a 365nm LED with a filter to select out the spectral range around 340nm for excitation of the calcium indicator fura2. But since we had actually anticipated the possible need for this, it was rather nice to find an application for which it was actually necessary!

But finally, optical feedback isn’t useful just on short timescales. If you’re not pulsing an LED, you might think that optical feedback isn’t really going to be useful, but we have had people needing stable optical outputs over periods of days or even longer, so for them it’s been very reassuring to know that optical feedback will guarantee that too. It’s not just for the short events in life….

How do you make a filterwheel step more quickly?

Well, the short answer is of course to use a more powerful motor, but there’s potentially a lot more to it than that. This note is to describe the various other approaches that can be taken, and which we ourselves have done in the design of our Optospin system. Ultimately it’s all down to the laws of physics. Although they can’t be broken, they can be bent a little here and there. Let’s see what we can do!

Spinning & Stepping

The first point to consider is the difference between spinning and stepping, since if continuous spinning can form a satisfactory solution, the problems pretty much go away. It’s a potentially sobering thought that there are stellar-mass objects out there (pulsars) that are happily spinning away at speeds of up to hundreds of revolutions per second, which should be fast enough for most filterwheel applications, so size just shouldn’t be a limiting factor here. Admittedly there is the small problem of how to generate the energy to get something so big to rotate so quickly in the first place, but the point is that it can be done, and once you’re up to speed (and ignoring frictional effects for a filterwheel and relativistic effects for a pulsar), the further energy requirements to keep it going are relatively small.

In practice the potential problem with spinning a filterwheel continuously is how to synchronise other equipment to it. Ideally we would like – or may even need – to send light through the wheel only when it is reasonably aligned with a particular filter position, and for the detector (usually a camera) and its associated data-capture software to be synchronised to these specific events. Although this may not sound straightforward, we have in fact already done it all, as explained in Jez’s application note on the subject. The basic rule of thumb is that as soon as you want to acquire more than just a few images per second, this is likely to be the best way to go!

A continuous spin mode was incorporated in the Optospin design from the start. It can either generate the required spin frequency itself, to which the other equipment can be synchronised, or it can synchronise to an externally generated control frequency, so there is considerable flexibility in how the various components of the system can be co-ordinated with each other. The important point is that this can all be a lot easier to achieve than you may think – or it will be if you talk to us!

Another advantage of the continuous spin mode is that there is going to be a lot less vibration compared with the discontinuous rotations of stepping mode, although it just so happens that the stepping mode of the Optospin offers a very effective solution to this if required. More on that solution later, but there remains the inescapable problem in stepping mode that, to go from one filter position to another, the wheel has to accelerate from one rest position and then decelerate again to come to an exact stop at another. That in itself is quite a design challenge, especially if the steps are to be as quick as the electromechanical constraints will allow, but even if this issue can be satisfactorily dealt with (as we believe we have), inevitably a reaction torque is generated as the wheel changes speed. As Martin our President (who once again has volunteered to write one of these Notes) likes to point out, the reaction torque from an accelerating or decelerating filterwheel tends to make the rest of the universe rotate in the opposite direction. Although the effect is unlikely to be noticeable at distances and object sizes approaching those of even the closest pulsar, in practice the nearest point of the rest of the universe is likely to be your microscope, which is likely to be close enough to matter. A microscope is also going to be particularly sensitive to any sort of vibration, so this is potentially very bad news.

Rotating mass

But before we consider our specific solution, it should be obvious that we can reduce the effect by keeping the rotating mass as small as possible. However, what matters here is not just the mass itself, but rather the moment of rotary inertia, which also depends on how the mass is distributed. For fastest stepping we want to keep the inertia low in any case, since this determines how much energy we need to put into the system in order to accelerate it or decelerate it at a given rate – the lower the inertia, the faster these accelerations and decelerations can be for a given motor strength (torque). What comes next may be a bit of an eye-opener for some readers (and perhaps even filterwheel designers?)!

A fuller description of all this is given in the online Optospin manual, but basically the moment of inertia of a rotating disc of any given thickness is proportional to the fourth power of its radius! Again as explained there, the situation in practice isn’t quite that bad, as the stepping time for a given motor torque increases “only” with the square of the radius, but it remains a huge effect. So for fast stepping the solution is to stay small! In practice the size of a filterwheel is at least partly determined by the required optical apertures, for which 25mm is the usual figure in imaging applications, but then we have to consider how many filters we want to put in it.
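The scaling argument above can be sketched in a few lines (assuming, as in the text, a disc of fixed thickness and a fixed motor torque):

```python
import math

# For a disc of fixed thickness and density, mass ~ r^2 and the moment
# of inertia I = m*r^2/2, so I ~ r^4. For a fixed step angle theta and
# motor torque, t = sqrt(2*theta*I/torque), so stepping time ~ sqrt(I) ~ r^2.
def inertia_ratio(radius_ratio: float) -> float:
    return radius_ratio ** 4

def step_time_ratio(radius_ratio: float) -> float:
    return math.sqrt(inertia_ratio(radius_ratio))

print(inertia_ratio(2.0))    # 16.0 -> double the radius, 16x the inertia
print(step_time_ratio(2.0))  # 4.0  -> and 4x the stepping time
```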

The smallest feasible number is three, in a close-packed triangular configuration, but that leaves little space for the central axle, and basically none for a powering system. We did make a few wheels like this once though, with an electromagnetic drive system operating around the edge, but that necessarily introduced additional size and hence inertia where it was least wanted, which did rather compromise the arrangement. For the Optospin we therefore chose a six-filter configuration, which left a roughly filter-sized hole in the middle. This was big enough to accommodate a small but extremely powerful motor, of a type originally designed for electric flight, where a high power-to-weight ratio is similarly important. The rotating components of the motor (steel and rare-earth magnets) are relatively massive, so having the motor in the middle, where its contribution to the total inertia is much lower than in that edge-driven design, is a pretty much ideal solution.

10 filters in a wheel?

But are six filter positions enough? They may not be, so designs with ten or possibly even more filters are also available from several manufacturers. However, they are inevitably going to incur a stepping time penalty compared with an equivalent six-filter design. The geometry basically says that once you go beyond six filters in the central-motor configuration, the required wheel radius goes up with the number of filters, so on that basis the stepping time of a ten-filter wheel is likely to be nearly three times worse than that of an otherwise equivalent six-filter one! Ouch….
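The "nearly three times" figure follows directly from the scaling already described. A sketch, assuming radius grows in proportion to the filter count beyond six and stepping time with the square of the radius:

```python
# Ten-versus-six comparison: beyond six filters around a central motor,
# the filters must fit around the circumference, so radius ~ filter count,
# and stepping time ~ radius squared.
def step_time_penalty(n_filters: int, baseline: int = 6) -> float:
    return (n_filters / baseline) ** 2

print(round(step_time_penalty(10), 2))  # 2.78 -> "nearly three times worse"
```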


By the way, this analysis assumes that the wheel is being uniformly accelerated for the first half of a step, and uniformly decelerated for the second half, as we do rather precisely for the Optospin. This gives a stepping time that is proportional to the square root of the number of filter positions travelled, so the worst-case stepping time for the Optospin, which is the three-position step to go to the diametrically opposite one, is only about 70% longer than going to an adjacent one. A simpler control system, which might give a more nearly constant speed between filter positions instead, would therefore perform relatively much worse for the more “distant” steps, so do please bear this in mind when comparing filterwheel specifications!
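The square-root relationship above is easily checked (a sketch of the scaling, not the Optospin control code):

```python
import math

# With uniform acceleration for the first half of a step and uniform
# deceleration for the second, stepping time scales with the square root
# of the angle moved, i.e. of the number of filter positions travelled.
def relative_step_time(positions_moved: int) -> float:
    return math.sqrt(positions_moved)

# Worst case on a six-position wheel: three positions, to the
# diametrically opposite filter.
worst = relative_step_time(3) / relative_step_time(1)
print(round(worst, 2))  # 1.73 -> only about 70% longer than one position
```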

Clearly, keeping the wheel and the filters thin will help, but this dimension has only a linear effect on the inertia, and hence a square-root effect on the stepping time. Still worth having of course, but the radius remains the killer, so we have addressed the issue in a different way. Our solution for more than six filters is to use two wheels in series, with an open position in each, thus giving a choice of ten filters in total. If that were to require more optical space it wouldn’t be so nice, but in fact we’ve been able to design the Optospin so that the wheel is somewhat offset in its housing in the direction of light travel. This allows a second Optospin to fit into the same physical and hence optical space (which is itself only 35mm) “the other way round” in an overlapping configuration, so the two units appear to be side-by-side, rather than in series as they actually are. To make this completely transparent from a user point of view, the control system (both hardware and software options) allows the two wheels to be driven just as if they were a single ten-position one, so there really is no downside to this approach.

And finally, this configuration provides a very elegant and effective solution to the “rest of the universe” problem. If the two wheels are simultaneously driven in opposite directions, which they already are in this configuration by default because of their opposite orientations, their effects on the rest of the universe rather precisely cancel. It’s both easy and very satisfying to see this in operation! A single Optospin just placed on a worksurface will twist around like an epileptic icedancer (no insult intended against such people of course, although sufferers should probably avoid that particular activity) if driven at anywhere near its full power, because of the countertorque which now mainly affects its housing. In contrast, a pair connected in this way and driven together just sits there as if nothing is going on. In fact, the situation can be made even better than this. For the best torque cancellation effect it’s important to have an equal number of filters in each wheel, so the obvious solution is to distribute them so that one wheel has filters in the odd numbered positions, and the other has them in the even ones. This means that the inertia of each wheel ends up being lower than that of a single one with all six filter positions occupied. An excellent solution to the problem all round!

Which camera to go with my microscope objective?

| Objective magnification | Objective NA | XY resolution limit for 500nm light (nm) | Theoretical pixel size for Nyquist, ½ Rayleigh (µm) | R1 pixel (µm) | R1 field of view (µm) | R6 pixel (µm) | R6 field of view (µm) | sCMOS 4.2 pixel (µm) | sCMOS 4.2 field of view (µm) | sCMOS 5.5 pixel (µm) | sCMOS 5.5 field of view (µm) | 512 EMCCD pixel (µm) | 512 EMCCD field of view (µm) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.1 | 3050 | 1.53 | 6.45 | 8,800×6,600 | 4.54 | 12,200×10,000 | 6.5 | 13,300×13,300 | 6.5 | 16,600×14,000 | 16 | 12,200×10,000 |
| 1 | 0.25 | 1220 | 0.61 | 6.45 | 8,800×6,600 | 4.54 | 12,200×10,000 | 6.5 | 13,300×13,300 | 6.5 | 16,600×14,000 | 1 | 2,687×2,203 |
| 2 | 0.2 | 1525 | 1.53 | 6.45 | 4,400×3,300 | 4.54 | 6,100×5,000 | 6.5 | 6,500×6,500 | 6.5 | 8,300×7,000 | 2 | 2,687×2,203 |
| 2 | 0.5 | 610 | 0.61 | 6.45 | 4,400×3,300 | 4.54 | 6,100×5,000 | 6.5 | 6,500×6,500 | 6.5 | 8,300×7,000 | 2 | 2,687×2,203 |
| 4 | 0.2 | 1525 | 3.05 | 6.45 | 2,200×1,650 | 4.54 | 3,050×2,500 | 6.5 | 3,325×3,325 | 6.5 | 4,150×3,500 | 5 | 2,687×2,203 |
| 5 | 0.5 | 610 | 1.53 | 6.45 | 1,760×1,320 | 4.54 | 2,440×2,000 | 6.5 | 2,660×2,660 | 6.5 | 3,320×2,800 | 6 | 2,687×2,203 |
| 10 | 0.45 | 678 | 3.39 | 6.45 | 880×660 | 4.54 | 1,220×1,000 | 6.5 | 1,330×1,330 | 6.5 | 1,660×1,400 | 11 | 2,687×2,203 |
| 16 | 0.8 | 381 | 3.05 | 6.45 | 550×413 | 4.54 | 763×625 | 6.5 | 831×831 | 6.5 | 1,038×875 | 18 | 2,687×2,203 |
| 20 | 0.75 | 407 | 4.07 | 6.45 | 440×330 | 4.54 | 610×500 | 6.5 | 665×665 | 6.5 | 830×700 | 23 | 2,687×2,203 |
| 20 | 1.0 | 305 | 3.05 | 6.45 | 440×330 | 4.54 | 610×500 | 6.5 | 665×665 | 6.5 | 830×700 | 23 | 2,687×2,203 |
| 25 | 1.1 | 277 | 3.47 | 6.45 | 352×264 | 4.54 | 488×400 | 6.5 | 532×532 | 6.5 | 664×560 | 28 | 2,687×2,203 |
| 40 | 0.95 | 321 | 6.42 | 6.45 | 220×165 | 4.54 | 305×250 | 6.5 | 333×333 | 6.5 | 415×350 | 45 | 2,687×2,203 |
| 40 | 1.1 | 277 | 5.55 | 6.45 | 220×165 | 4.54 | 305×250 | 6.5 | 333×333 | 6.5 | 415×350 | 45 | 2,202×1,805 |
| 40 | 1.3 | 235 | 4.69 | 6.45 | 220×165 | 4.54 | 305×250 | 6.5 | 333×333 | 6.5 | 415×350 | 45 | 2,687×2,203 |
| 60 | 1.2 | 254 | 7.63 | 6.45 | 147×110 | 4.54 | 203×167 | 6.5 | 222×222 | 6.5 | 277×233 | 68 | 2,687×2,203 |
| 60 | 1.3 | 235 | 7.04 | 6.45 | 147×110 | 4.54 | 203×167 | 6.5 | 222×222 | 6.5 | 277×233 | 68 | 2,687×2,203 |
| 60 | 1.4 | 218 | 6.54 | 6.45 | 147×110 | 4.54 | 203×167 | 6.5 | 222×222 | 6.5 | 277×233 | 68 | 2,687×2,203 |
| 60 | 1.49 | 205 | 6.14 | 6.45 | 147×110 | 4.54 | 203×167 | 6.5 | 222×222 | 6.5 | 277×233 | 68 | 2,687×2,203 |
| 100 | 1.45 | 210 | 10.52 | 6.45 | 88×66 | 4.54 | 122×100 | 6.5 | 133×133 | 6.5 | 166×140 | 114 | 2,687×2,203 |
| 100 | 1.49 | 205 | 10.23 | 6.45 | 88×66 | 4.54 | 122×100 | 6.5 | 133×133 | 6.5 | 166×140 | 114 | 2,687×2,203 |
| 150 | 1.45 | 210 | 15.78 | 6.45 | 59×44 | 4.54 | 81×67 | 6.5 | 89×89 | 6.5 | 111×93 | 170 | 2,687×2,203 |
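The resolution and Nyquist columns of the table follow from the Rayleigh criterion, and are easy to reproduce. A minimal sketch (500nm light assumed, as in the table):

```python
def rayleigh_limit_nm(wavelength_nm: float, na: float) -> float:
    """Lateral (XY) Rayleigh resolution limit: 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / na

def nyquist_pixel_um(magnification: float, na: float,
                     wavelength_nm: float = 500.0) -> float:
    """Camera pixel size that samples at Nyquist, i.e. half the Rayleigh
    limit at the sample, projected through the objective magnification."""
    half_rayleigh_nm = rayleigh_limit_nm(wavelength_nm, na) / 2.0
    return half_rayleigh_nm * magnification / 1000.0  # nm -> um

print(round(rayleigh_limit_nm(500, 1.4)))   # 218, matching the 60x/1.4 row
print(round(nyquist_pixel_um(60, 1.4), 2))  # 6.54
```

Comparing the second result with a camera's physical pixel size shows at a glance whether a given objective/camera pairing under- or over-samples the optical resolution.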