Existing digital high-speed cameras are limited not only by the resolution of their sensors, but also by the sensor read-out rate, by the data transmission rate to their storage media, and by the capacity of those frame buffers. Depending on the design, the maximum read-out and transmission rates restrict the recording frequency and/or the usable area of the image sensor, whereas the capacity of the buffer memory is responsible for the comparatively short length of the sequences. The buffer memory can be located inside the camera head or on a plug-in board of the control unit.
Interface | Nominal rate (Mbit/s) | Nominal rate (MByte/s) | Max. cable length
Fast Ethernet | 100 | 12.5 | 100 m
Gigabit Ethernet | 1000 | 125 | 100 m
FireWire 400 (IEEE 1394a) | ~400 | 40 | 4.5 m (14 m)
FireWire 800 (IEEE 1394b) | ~800 | 88 | ... (72 m)
USB 1.1 (Full Speed) | 12 | 1.5 | 3 m
USB 2.0 (High Speed) | 480 | 60 | 5 m
Comparison of some PC interfaces (1 Bit = 1/8 Byte)
Of course, there are high-speed camera systems whose resolution and recording frequency permit continuous operation over a longer span of time, like a video recorder. They store their data on tape inside the control unit or directly on the hard disk of a PC.
Expressed in sober numbers: a megapixel sensor, even with a comparatively humble color depth of 8 bit per channel, generates 1 Gigabyte, that is 1 000 Megabyte, of data per second at 1 000 frames/sec in RAW format. That amounts to roughly 1 1/4 CD-ROMs per second. This amount of data first has to be transferred and then has to be stored somewhere.
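A small Python sketch of that arithmetic (the CD-ROM capacity of 700 MByte is an assumption; with 650 to 800 MByte per disc the result lands between roughly 1 1/4 and 1 1/2 discs per second):

```python
# Raw data rate of the megapixel example: 1 byte per pixel in 8 bit RAW format.
pixels = 1_000_000      # one megapixel
bytes_per_pixel = 1     # 8 bit RAW (Bayer) data
fps = 1000              # frames per second

rate = pixels * bytes_per_pixel * fps            # bytes per second
print(rate / 1e6, "MByte/s")                     # -> 1000.0 MByte/s

cd_capacity = 700e6                              # assumed CD-ROM capacity in bytes
print(rate / cd_capacity, "CD-ROMs per second")  # -> about 1.4
```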
Even Gigabit Ethernet (= 1 000 Megabit/sec), the favourite at the moment, offers a nominal transfer rate of only 125 Megabyte/sec (1 Bit = 1/8 Byte). In practice about 100 Megabyte/sec remain, due to a certain administrative overhead. The next bottlenecks are the PCI bus inside the computer, with a nominal transfer rate of 132 Megabyte/sec (33 MHz at 32 bit width) shared by all plug-in cards, and the write rate of the mass storage medium. A continuous 100 Megabyte/sec is more than a challenge for a standard hard disk. Hence the introduction of the PCI-X bus with double clock rate and double bus width, and at present that of the PCI Express bus (PCI Express x1: 500 Megabyte/sec) with almost fourfold data transfer compared with PCI, and of RAID systems, of course. Meanwhile PCIe 2.0 and PCIe 3.0 reach 5 and 8 Gigatransfers/sec per lane, respectively. And affordable flash disks (solid state disks, mass storage drives without moving parts) complement conventional hard disk drives.
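The bottleneck argument can be sketched the same way; the sustained hard-disk figure is an assumption for a single contemporary drive, the other rates are those mentioned above:

```python
# Compare the required camera data rate with typical sustained rates (MByte/s).
required = 1000                    # 1 Mpx, 8 bit RAW, 1000 fps (see above)

links = {
    "Gigabit Ethernet (effective)": 100,
    "PCI, 32 bit / 33 MHz (nominal, shared)": 132,
    "PCI Express x1 (nominal)": 500,
    "single hard disk, sustained write (assumed)": 80,
}

for name, rate in links.items():
    verdict = "sufficient" if rate >= required else "bottleneck"
    print(f"{name}: {rate} MByte/s -> {verdict}")
```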
Even with »professional« interfaces such as HD-SDI (High Definition Serial Digital Interface) from studio technology, one gets gross data rates of just 185 or 371 Megabyte/sec. Here the permitted cable length (ca. 100 m) is perhaps attractive, but suitable storage media and their connection are expensive.
With the spread of USB 3.0 (5 Gigabit/sec, i.e. 625 Megabyte/sec) something could change here, at least concerning costs.
Already at VGA resolution with 640 x 480 pixels and a humble frame rate of 100 frames/sec, more than 30 Megabyte of data are sampled per second. That is just about manageable with standard interfaces and mass storage media.
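The same back-of-the-envelope arithmetic for the VGA case:

```python
# VGA example: 640 x 480 pixels, 8 bit RAW, 100 frames/sec.
rate = 640 * 480 * 1 * 100     # bytes per second
print(rate / 1e6, "MByte/s")   # -> 30.72 MByte/s, within reach of USB 2.0 or Gigabit Ethernet
```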
(Figures: line/column binning; comparison of SpeedCam read-out areas)
The measures to increase the recording frequency despite the limited read-out rate depend on the design of the sensor. CCD sensors rather offer a line or column reduction, e.g. in the form of binning; CMOS sensors, which are designed similarly to DRAMs, rather offer a reduction of the read-out format.
Thus, for instance, SpeedCam +500/+2000/lite (CCD) use line binning, SpeedCam 512/PRO (CCD) use column binning, and SpeedCam Visario (CMOS) uses format adjustment.
Whereas binning keeps the field of view constant and saves pixels to be read out by reducing the resolution, format adjustment saves pixels by reducing the read-out area. Binning makes images look more blurred; format adjustment makes them smaller, but with constant quality. In the end it comes to the same thing: if one wants the same field of view, one will have to move or zoom closer, and the image turns pixelated. Binning does this without changing the location.
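A minimal sketch of the two reduction strategies on a synthetic sensor image (Python/NumPy; the 2x2 binning factor and the centered read-out window are arbitrary example choices):

```python
import numpy as np

sensor = np.random.randint(0, 256, (512, 512), dtype=np.uint16)  # synthetic full frame

# Binning: sum 2x2 pixel blocks -> same field of view, half the resolution.
binned = sensor.reshape(256, 2, 256, 2).sum(axis=(1, 3))         # 256 x 256

# Format adjustment: read out only a centered 256 x 256 window -> smaller field
# of view, but every remaining pixel keeps its original quality.
cropped = sensor[128:384, 128:384]

print(binned.shape, cropped.shape)   # both deliver a quarter of the original pixels
```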
One cannot say in general which is better. During adjustment jobs one will appreciate avoiding changes to lens and camera settings when increasing the frame rate. On the other hand, there are advantages when all the images show the same quality. In practice format adjustment is preferred: quality counts. (This preference is also supported by the increasing use of CMOS sensors.)
Existing read-out procedures for sensors are somewhat similar to displaying on screens. In the simplest case half frames are used, as in the interlaced mode of CRT television. The human eye is too slow to recognize this trick; only in still-frame replay does one perceive the deceit, due to the common artifacts.
2:1 interlaced operation in frame integration mode: The first half frame consists of all odd lines, the second half frame of all even ones. Both half frames are read out one after the other, but are displayed simultaneously. Disadvantage: one frame actually consists of two images with a time lag between them, mixed line by line. Lines 1, 3, 5, ... e.g. are from the present moment, whereas lines 2, 4, 6, ... still show the previous image. In the next step the even lines receive a new image, whereas the odd ones still show the old one, and so on. Hence the typical so-called comb artifacts in animated scenes; the full frame resolution, however, is preserved.
(For deinterlacing in video replay, comparable with »weave«.)
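How two such temporally offset half frames end up interleaved in one frame (»weave«) can be sketched on synthetic data:

```python
import numpy as np

h, w = 8, 8
field_odd  = np.full((h // 2, w), 1)   # odd lines, exposed at time t
field_even = np.full((h // 2, w), 2)   # even lines, exposed one field period later

frame = np.empty((h, w), dtype=int)
frame[0::2] = field_odd                # lines 1, 3, 5, ...
frame[1::2] = field_even               # lines 2, 4, 6, ...
# Moving edges now alternate between the two exposure times line by line:
# this is the source of the typical comb artifacts.
```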
2:1 interlaced operation in field integration mode: Two adjacent lines are already summed up (binning) during read-out and displayed together. So in the first half frame, lines 1 and 2 give line 1, lines 3 and 4 give line 2, and so on. In the second half frame, lines 2 and 3 give line 2, lines 4 and 5 give line 4, and so on. Therefore the resolution is reduced, and there may be block or staircase artifacts at edges. But the temporal resolution is better, and so any motion blur is reduced.
(For deinterlacing in video replay, roughly comparable with »field averaging«.)
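The corresponding sketch for field integration mode; the pairing offsets follow the description above:

```python
import numpy as np

lines = np.arange(1, 9).reshape(8, 1) * np.ones((1, 8))   # line i filled with value i

# First half frame: lines (1+2), (3+4), (5+6), ...
field_a = lines[0::2] + lines[1::2]
# Second half frame: lines (2+3), (4+5), (6+7), ... (shifted pairing)
field_b = lines[1:-1:2] + lines[2::2]

print(field_a[:, 0])   # [ 3.  7. 11. 15.]
print(field_b[:, 0])   # [ 5.  9. 13.]
# Vertical resolution is halved, but each field is a single, short exposure.
```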
Non-interlaced: Only half frames are taken, which, if necessary, are complemented to images with full resolution by doubling the lines. Semi-professional (VHS/SVHS) video recorders can often operate in this mode. Resolution may be lost, but the temporal relation is clear.
(For deinterlacing in video replay, roughly comparable with »bob« (interpolation of the missing lines in each half frame) or »skip field« (only every second half frame is displayed, with its missing lines interpolated).)
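And a sketch of the line-doubling (»bob«) idea for a single half frame:

```python
import numpy as np

field = np.random.randint(0, 256, (240, 640))   # one half frame, e.g. 240 lines

# Simplest »bob«: repeat every line once to restore the full 480-line height.
frame = np.repeat(field, 2, axis=0)             # 480 x 640
print(frame.shape)

# A slightly nicer variant interpolates the missing lines from their neighbours
# instead of duplicating them; the temporal relation stays unambiguous either way.
```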
Progressive scan: The sensor is read out completely at (as far as possible) one single point in time. Highest resolution in image and time. Technically sophisticated, but the most productive method for high-speed imaging and motion tracking.
(In traditional TV/video technology there are no full images, so-called »frames«; only half images, so-called »fields«, are used.)
For high-speed cameras progressive scan is the most suitable approach. In the standard image processing sector (»machine vision«), however, the interlaced methods can provide reduced data rates and, due to the »double exposure« and perhaps an optimized fill factor, they can increase light sensitivity.
Image memory, resolution levels, frequency and recording time of some SpeedCam high-speed cameras
Depending on the sensor design, the image data are gathered in analogue or digital form. Usually analogue image data are digitized before storing, but there are exceptions. Normally the data are stored in some kind of buffer, whose location varies with the state of the art and with the demands on the camera or the camera system.
Even the buffering of image data on the sensor itself is possible, though of course very limited; it is used especially in very fast cameras.
If the data rate is small enough, the image data can be streamed directly to a storage medium (e.g. the hard disk of a notebook). For »real« high-speed cameras, therefore, this is not an option.
Digital high-speed camera concepts
Legend to the figure at the left: RAM = image memory; µC = microcontroller or processor; A/D = analogue-to-digital conversion (often already integrated in the sensor).
The green card represents a PC card, the red lines represent connection possibilities (image and control data).
The standard connection can be e.g. (Gigabit) Ethernet or FireWire.
In SpeedCam +500/+2000 and SpeedCam PRO the analogue image data are transmitted to the control host and only there converted into digital values and stored. That permits keeping the actual camera head small and its power consumption (waste heat!) modest. The demands on the cables, however, are comparatively high due to the analogue data transmission. SpeedCam Visario systems, on the other hand, already convert the data inside the camera head and store them there.
Because of its limited capacity the buffer memory is continuously overwritten in some kind of endless loop. The trigger impulse controls this process, and the images are done: »cut!«.
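A minimal sketch of such an endless-loop (ring) buffer with a trigger; the split into pre- and post-trigger frames is a common scheme, not a statement about any particular SpeedCam model:

```python
from collections import deque

class RingRecorder:
    """Continuously overwrite a fixed-size frame buffer until a trigger arrives."""

    def __init__(self, capacity, post_trigger):
        self.buffer = deque(maxlen=capacity)   # oldest frames fall out automatically
        self.triggered = False
        self.remaining = post_trigger          # frames still to record after the trigger

    def on_frame(self, frame):
        if self.triggered:
            if self.remaining == 0:
                return False                   # recording finished: »cut!«
            self.remaining -= 1
        self.buffer.append(frame)
        return True

    def trigger(self):
        self.triggered = True                  # from now on only `post_trigger` frames follow

# Usage sketch: feed frames, fire the trigger somewhere in between.
rec = RingRecorder(capacity=1000, post_trigger=300)
for i in range(5000):
    if i == 2500:
        rec.trigger()
    if not rec.on_frame(i):
        break
print(len(rec.buffer), "frames kept around the trigger")
```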
This buffer memory, which usually holds rather »raw« image data without color recovery algorithms (hence the expression RAW format), is mostly built with DRAM (Dynamic Random Access Memory) integrated circuits similar to those known from the memory banks of PC main memory, with the typical property of DRAMs of losing their data during a power failure.
Memory ICs which keep their data even without power supply, so-called NVRAMs (non-volatile RAM), SRAMs (static RAM) or flash memory, are hardly used for buffer memory, due to various disadvantages such as lower speed, higher cost, higher power consumption, shorter durability, ...
Instead, one makes do with a backup (rechargeable) battery, which supplies the buffer memory if necessary, or even with a full service (rechargeable) battery keeping the complete camera in working order; sometimes with a UPS for the complete camera system, especially for systems which first buffer their data in the control unit, like SpeedCam +500/+2000 and SpeedCam PRO.
Explanation of the diagrams:
The jumps in recording time derive from the reduction steps of the resolution with increasing frame rate. If one voluntarily accepts a reduction level at lower frame rates, one can drastically increase the recording time in places; one then moves along the dash-dotted lines. In this case the recording time may be limited by the minimum frame rate of the system, which is about 50 frames/sec with SpeedCam +500 and SpeedCam PRO and about 10 frames/sec with SpeedCam Visario.
For better comprehension the reduction levels are drawn in as well.
(The markers on the curves are given only to identify the curves in a black-and-white print. In reality the frequencies are selectable without steps.)
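The underlying relation between buffer size, resolution and frame rate can be sketched as follows; the 4 GByte image memory and the reduction step are made-up example values, not the figures of any particular SpeedCam model:

```python
# Recording time = buffer capacity / (pixels per frame * bytes per pixel * frame rate).
buffer_bytes = 4 * 1024**3           # assumed 4 GByte image memory
bytes_per_pixel = 1                  # 8 bit RAW

def resolution(fps):
    """Assumed reduction step: full frame up to 1000 fps, half height above."""
    return (1024, 1024) if fps <= 1000 else (1024, 512)

for fps in (500, 1000, 1001, 2000):
    w, h = resolution(fps)
    seconds = buffer_bytes / (w * h * bytes_per_pixel * fps)
    print(f"{fps:5d} fps at {w}x{h}: {seconds:6.2f} s")
# 500 fps -> 8.19 s, 1000 fps -> 4.10 s, 1001 fps -> 8.18 s (the jump), 2000 fps -> 4.10 s
```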
The buffer memory is comparable to a roll of film in a traditional movie camera: its limited capacity forces one to shift the data to a mass storage medium. Here one often uses the hard disk of the control unit, its CD or DVD drives and, of course, the LAN.
Some high-speed cameras have a hard disk or a flash card inside their head. For applications with high mechanical loads (e.g. use in a crash vehicle), however, at least the hard disk carries a residual risk, even if it is automatically parked during the trial and even if several models are specified for the loads of a crash test when parked.
These mass storage media in the camera head, however, can accelerate the work considerably. One takes one sequence after another within a short period, shifts the data to the mass storage medium, and during a break or overnight one downloads the data or simply exchanges the storage medium.
Every manufacturer has its own philosophy of presenting the image data more or less processed, for instance with automatic contrast or edge enhancement. As in photography, professionals prefer access to the RAW images: they are not »falsified« and very space-efficient. For instance, uncompressed AVI files are bigger than RAW files by a factor of 3.5 to 4.
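One plausible accounting for that factor (my assumption, not the author's figures): one byte per pixel of Bayer RAW data against three or four bytes per pixel of uncompressed RGB in the AVI container:

```python
# A possible breakdown of the size ratio (an assumption, not the author's accounting):
raw = 1              # bytes per pixel: 8 bit Bayer RAW, one value per pixel
rgb24, rgb32 = 3, 4  # bytes per pixel of uncompressed RGB after color reconstruction

print(rgb24 / raw, rgb32 / raw)   # -> 3.0 4.0, bracketing the quoted factor of 3.5 to 4
```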
At first one sees the (potential) images through various preview or viewfinder channels, which may be processed or compressed in real time by a DSP (digital signal processor). Often simple sharpening, edge enhancement and color saturation filters are used, not to mention the defective pixel correction, i.e. the interpolation of defective pixels from their neighbors.
For massive image processing in real time, software is hardly usable. Usually it operates on the data saved on the mass storage medium and converts them to common file formats; because of the required calculation time this may even happen overnight.
In spot checks of the serial production of safety-relevant devices the image data are stored on CD or DVD and archived. For air bags, e.g., this means ten years under the scope of product liability law plus an additional three years to cover the legal objection period, thirteen years in all. A lot of material accumulates in that time.
©WP (1998 -) 2012
http://www.fen-net.de/walter.preiss/e/slomo_im.htm
Update: V8.4, 2012-03-02