Video Capture and Data Acquisition: The Engineer's Notebook

Flexible Video Capture for Non-Destructive Testing 10/29/2015


Video-based inspection techniques are at the heart of efficient inspection in a number of industries. When inaccessibility makes direct visual inspection impossible, video-based techniques built on borescopic inspection and testing make non-destructive visual inspection possible.

LED lighting, fiber optics, and other technological advancements have improved video quality in borescopic testing, widening the possible applications for this inspection methodology. Flexible video capture technology underpins effective borescopic inspection, allowing video signals to be processed and captured for real-time or later use. Sensoray’s video capture technology accepts a wide range of inputs – composite, component, VGA, DVI, and HD-SDI – to give operators the flexibility they need to use borescopic inspection technology to its fullest potential.

Two industry favorites are Sensoray’s Models 2246 and 2263. The 2246 is a USB 2.0 A/V processor that supports multiple input and output formats. The 2263 is a versatile USB A/V encoder that supports a variety of analog and digital input formats. Both models support text and graphic overlays and are available as bare boards for OEMs or housed in rugged enclosures.

Eliminate Obsolescence and Improve I/O Reliability with the Model 2600 Series 9/29/2015


In a wide range of industries, remote data acquisition, communications, and data transfer can prove troublesome, especially with the many quick-to-become-obsolete input/output (I/O) systems available on the market. As a result, industrial users across a wide range of applications find themselves in search of a reliable remote data acquisition system that offers a longer life span.

Take, for example, Speedline, whose Electrovert business manufactures wave soldering systems. The ability to tightly control the wave soldering process is critical to overall soldering quality and repeatability. Communications and internal data transfer between sensors, controllers, and the machine operating system can reach thousands of transactions per second. The technology used in the design of the input/output circuits, data conversion, and cabling is key to the wave soldering process and to the machine’s ability to perform as designed.

2600 Series

Speedline reached out to Sensoray with a desire for a low-cost, Ethernet-based modular I/O measurement and control system that would be protected from platform obsolescence. With this set of needs in mind, Sensoray designed the system that became the Model 2600 Series. Unlike I/O systems that become obsolete as expansion bus standards evolve, the Ethernet-based Model 2600 gives Speedline and other industrial users peace of mind through its guaranteed longer life span. The system also simplifies training for assembly and service technicians, simplifies cabling, and improves overall serviceability compared with other I/O options.

The low cost, DIN rail mountable open-frame modules of the 2600 Series are designed to quickly snap onto standard rails, simplifying installation. If a module fails, operators can unsnap the module from the rail, snap in a new one and be ready to acquire data again in very little time. This is extremely important in the fast-paced, competitive electronics manufacturing environment, where every minute of production is valuable.
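
To make the remote-I/O idea concrete, here is a minimal sketch of how a host program might poll an Ethernet-attached I/O module. The address, port, and ASCII command are hypothetical placeholders, not the 2600 Series’ actual protocol; consult the product documentation for the real interface:

    import socket

    # Hypothetical example: poll an Ethernet I/O module for one analog reading.
    # The address, port, and "READ AIN0" command are illustrative only.
    MODULE_ADDR = ("192.168.1.100", 10001)

    def read_channel(command: bytes, timeout: float = 1.0) -> str:
        """Send one request datagram and return the module's reply."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(command, MODULE_ADDR)
            reply, _ = sock.recvfrom(1024)
        return reply.decode("ascii").strip()

    if __name__ == "__main__":
        print(read_channel(b"READ AIN0"))  # e.g. "3.141" volts

Because the transport is ordinary Ethernet, host-side code like this keeps working as PC expansion buses come and go – which is the obsolescence argument in a nutshell.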

With this low-cost, Ethernet-based solution, Sensoray has presented an attractive alternative to expansion bus-based I/O systems for remote data acquisition, with greater reliability, easier installation, and a longer lifespan than many other products on the market. Since its first use in 2004, the Model 2600 Series has been put to use in thousands of other applications, including machine control, industrial process control, building automation, and remote data acquisition, in markets and industries ranging from agriculture to aerospace.

Reducing Video Latency 8/7/2015

Docking a cargo ship the size of a skyscraper is tricky business. Even the most experienced captain needs the help of a pilot to dock successfully. Maneuvering in confined areas while contending with the forces of wind and tide makes docking the most difficult and dangerous part of any voyage. This is why video capture devices are often used to help the captain and pilot view those areas around the hull and dock that aren’t otherwise visible.

But a delay between what the camera sees and what’s displayed on the monitor is another variable that compounds the difficulty. That delay is known as latency: the time it takes a frame of video to travel from the camera to the display. Factor in video latency along with wind, tide, and vessel reaction time, and you’ve got the potential for real trouble.

High latency means that what the pilot is seeing on the monitor is not what is happening in front of the camera. A pilot cannot afford any delay in seeing the results of his movements.

Achieving low latency, usually defined as less than 100 milliseconds (ms), is critical for this and other applications such as operating remote devices, video conferencing, streaming live events, and computer vision. This blog will explain how each piece of a video system contributes to latency.

Understanding how video signals are displayed

Video is often described in terms of frames per second (FPS). The Americas use the National Television System Committee (NTSC) standard, which is 29.97 FPS. The rest of the world uses Phase Alternating Line (PAL), which is usually 25 frames per second. (For the purposes of this blog we will be talking about latency with regard to NTSC.) When transmitting an NTSC video signal, each frame takes about 33.4ms to transfer.

Each frame is comprised of 480 lines of active video (525 including blanking). The video is interlaced, meaning there are two fields for each frame. One field stores the even lines and the other field stores the odd lines in the frame. A video camera does not capture both fields simultaneously; instead, the shutter captures each field at twice the frame rate (59.94 FPS) in 1/59.94 seconds (about 16.7ms). Each field is 240 lines of active video.
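
These numbers are easy to sanity-check. Here is a quick back-of-the-envelope calculation in Python (a convenience sketch, not part of any product software) that reproduces the frame, field, and line periods used throughout this post:

    # NTSC timing used in the latency figures below.
    FPS = 30000 / 1001          # NTSC frame rate, ~29.97 frames/s
    TOTAL_LINES = 525           # lines per frame, including blanking

    frame_ms = 1000 / FPS                      # ~33.4 ms per frame
    field_ms = frame_ms / 2                    # ~16.7 ms per interlaced field
    line_us = 1000 * frame_ms / TOTAL_LINES    # ~63.6 us per line

    print(f"frame {frame_ms:.1f} ms, field {field_ms:.1f} ms, line {line_us:.1f} us")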

The way each stage in a system processes the video affects the latency of the entire system. For example, suppose we have a video system composed of a video decoder (which translates an analog video signal into a digital one), an image scaler, a video compression codec, and a USB interface.

The video decoder must process the whole frame (both fields) and then write the interlaced image into memory. The image scaler must wait for the whole frame to be available before it can resize the image – this can be done fairly quickly. The video compression codec must wait for the scaled image to be available before compressing it – this operation is fairly lengthy. After compression, the USB interface transfers the compressed video frame data to the host computer. The total time for a video frame to be acquired by the system and transferred to the host computer is the sum of all these stages.

As shown in Figure 1, the video decoder requires 33ms per frame; the image scaler requires 5ms per frame; the video compression codec requires 25ms per frame; and the USB interface requires 7ms per compressed frame – for a total of 70ms.

Figure 1 Video Latency

Yet each stage works independently of the others: while one frame is being processed by the image scaler, the next frame is being captured by the video decoder. Figure 1 shows each stage on a separate line to illustrate how the stages work in parallel. Each frame arrives at the host with a latency of 70ms, at 33ms intervals.
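
To make the pipelining concrete, here is a minimal Python model of the Figure 1 example. The stage durations are the illustrative values quoted above, not measurements of any particular device:

    # Pipelined latency model for the Figure 1 example.
    STAGE_MS = {"decoder": 33, "scaler": 5, "codec": 25, "usb": 7}
    FRAME_INTERVAL_MS = 33    # a new frame enters the pipeline every 33 ms

    def arrival_ms(frame_index: int) -> int:
        """Host arrival time of frame N: the stages run in parallel across
        frames, so each frame adds one frame interval to the previous one."""
        return frame_index * FRAME_INTERVAL_MS + sum(STAGE_MS.values())

    for n in range(3):
        print(f"frame {n}: {arrival_ms(n)} ms")   # 70, 103, 136 ms

Latency (70ms per frame) and throughput (one frame every 33ms) are independent: deeper pipelines add latency without slowing the frame rate.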

Typically a video system operator specifies how much latency they are willing to tolerate. A requirement for reduced-latency preview means the operator needs to view the video at the same time as it is being recorded.

To accomplish low latency preview, the device could bypass the image scaler and video compression codec and send the uncompressed video directly to the host. But sending uncompressed video over USB takes more time than sending compressed video, because an uncompressed frame contains many more bytes. The video decoder still requires 33ms per frame, and the USB interface requires 20ms per uncompressed frame, for a total of 53ms.

Techniques for Reducing Latency

Is there a way to improve this? One way is with a video decoder that signals an interrupt when each field is complete. This allows the system to quickly reassemble the two fields into a frame on the host, and to start sending one field while the video decoder is processing the next. In this case the video decoder still requires 33ms per frame, or 17ms per field. The USB interface requires 20ms per uncompressed frame, or 10ms per field, so only the second field’s 10ms transfer remains after decoding completes. The whole frame now arrives in about 43ms – a 10ms improvement, with the USB stage’s contribution to latency cut in half.
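
Reassembling the two fields on the host amounts to interleaving their lines. A minimal sketch using NumPy (assuming 480 x 720 active video and the even/odd field convention described earlier):

    import numpy as np

    # Weave two 240-line fields back into one 480-line interlaced frame.
    even_field = np.zeros((240, 720), dtype=np.uint8)      # lines 0, 2, 4, ...
    odd_field = np.full((240, 720), 255, dtype=np.uint8)   # lines 1, 3, 5, ...

    frame = np.empty((480, 720), dtype=np.uint8)
    frame[0::2] = even_field  # copy as soon as the first field's transfer lands
    frame[1::2] = odd_field   # the second field completes the frame ~17 ms later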

Some video decoders can be configured to interrupt at a specific line interval, and this would allow even lower latency by processing small chunks of lines instead of whole fields or frames. For example, were we to program the video decoder to deliver an interrupt every 16 lines, and transfer each chunk of lines separately over USB, latency would be reduced even further.

Figure 2 illustrates this concept. The video frame is divided into chunks of 16 lines, with a whole frame consisting of 30 chunks. Since the video frame takes 33ms, each 16-line chunk takes about 1.1ms to process and about 1ms to transfer over USB. Starting a new USB transfer for each chunk slightly reduces efficiency and requires more system processing power. The video decoder requires 1.1ms per 16 lines; the USB interface requires 1ms per 16 lines.

Now a single video frame can be transferred in about 34ms. In general, latency can be reduced by processing content in smaller pieces. Note that although Figure 2 draws the thirty 1ms USB transfers as a single block, the USB activity per frame spans roughly 33ms, not 30ms, because there is a 0.1ms gap between consecutive transfers.
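
The pattern behind all three cases is that, with overlapped transfers, total latency is roughly the full-frame decode time plus the transfer time of the final chunk. A small Python sketch using the numbers from this post:

    # Latency ~= full-frame decode time + USB transfer of the last chunk,
    # since earlier chunks transfer while later lines are still decoding.
    FRAME_MS = 33.4

    def latency_ms(last_chunk_usb_ms: float) -> float:
        return FRAME_MS + last_chunk_usb_ms

    print(latency_ms(20))   # whole-frame transfers:   ~53 ms
    print(latency_ms(10))   # per-field transfers:     ~43 ms
    print(latency_ms(1))    # 16-line chunk transfers: ~34 ms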

If the chosen video encoding works only at frame granularity, latency cannot go below one frame duration. Latency can be reduced by choosing an encoding that can work at field or line granularity.

Figure 2 Video Latency

It is possible to mitigate system latency, but we can never eliminate it. It is just a reality of the physical world. Low latency is not only critical to maritime affairs; it is also a critical factor in broadcasting, medical endoscopy, unmanned aviation (drones), and, even more critically, explosive ordnance disposal. And you thought bringing ships to harbor was tricky.

Key features of advanced video capture and processing devices for UAV, sUAV and drones 7/3/2015

Understandably, the characteristics of highly effective video capture and processing devices vary from application to application. Even in applications which might seem similar – manned and unmanned aerial vehicles, for example – technology demands can differ widely. In this post, we will address the specific demands of unmanned aerial vehicles (UAVs) in terms of advanced video capture and processing devices.

The primary demand on UAV video capture and processing technology is reliability despite restrictions. In short, UAV applications demand video capture and processing systems that reliably capture high-quality video, compress that video, and transmit it to operators on the ground with low latency, all while keeping weight and size to a minimum. Why is that?

  • Reliability… – Drone operators need to be able to see a high-quality, real-time feed of the video being captured by the drone in order to assess the information it gathers. Unlike in manned aerial systems, where video is only one aspect of a pilot’s information about the craft’s surroundings, UAV operators rely exclusively on video. High-quality, low-latency video capture and processing is therefore critical to UAV success, regardless of the mission.
  • …Despite restrictions – High-quality, real-time video is easier said than done, since the nature of UAVs places restrictions on video technology. Ideal video components require little power, are lightweight and compact, and transmit video data successfully using a low-bandwidth data link. There is, however, technology on the market that can meet those challenges.

    First, take the Sensoray Model 3011 Miniature 13 Megapixel HD IP Camera. The Model 3011 requires very little power in order to deliver excellent video quality. That high-quality video can be encoded and transmitted by the lightweight, compact Sensoray Model 2960 Dragon, a board with extreme computing power and flexible architecture that’s ideal for UAV video processing systems. The Dragon faithfully compresses and transmits high-quality data, even over a low-bandwidth link.
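
On the ground-station side, viewing an IP camera feed can be as simple as opening its network stream. Below is a minimal OpenCV sketch; the RTSP URL is a hypothetical placeholder, and the actual address depends on the camera’s configuration:

    import cv2

    # Hypothetical stream URL; substitute your camera's actual RTSP address.
    cap = cv2.VideoCapture("rtsp://192.168.1.10/stream1")

    while cap.isOpened():
        ok, frame = cap.read()        # grab the newest decoded frame
        if not ok:
            break
        cv2.imshow("UAV feed", frame)
        if cv2.waitKey(1) == 27:      # press Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()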

Drones Prove Invaluable in TX, OK Flood Response 6/9/2015

As drone technology improves and these devices move more fully into the civil sector, it is becoming ever clearer just how powerful a tool they can be. In the hands of law enforcement, disaster relief organizations, and even hobbyists, drones have the potential to change the way we respond to natural disasters and other crises.

The FAA’s relaxed guidelines and Certificate of Authorization (COA) process make the use of drones in these situations possible. Drones’ maneuverability, ability to fly at very low altitudes, and, most importantly, high-quality live video feeds make them ideal for crisis rescue situations.

Drone operators need to be able to see a high-quality, real-time feed of the video being captured by the drone in order to relay information like number of stranded citizens and location to rescuers. Technology like the Sensoray Model 3011 Miniature 13 Megapixel HD IP Camera makes these critical video feeds possible. The Model 3011 requires very little power in order to deliver excellent video quality in a seamless real-time video feed from drones to operators.

Texas drone in action

The flash flooding in Texas and Oklahoma is a perfect example of how useful this technology can be. Drones owned by research organizations and everyday citizens were deployed in response to the devastating floods. These drones helped locate stranded citizens, guided rescuers to them, and, in at least one case, physically transported rescue supplies.

The possibility that drones hold for saving lives in disaster situations is enormous. The events in Oklahoma and Texas were tragic, but the stories of rescues enabled by this emerging technology are heartening proof of how future rescue operations can be improved. If you are interested in learning more about Unmanned Aerial Systems (UAS) and advocacy for increased use of this technology in response to crises, take a look at SOAR Oregon, Lone Star UAS, or the Association for Unmanned Vehicle Systems International.

Streaming Video vs Machine Vision: How Do They Compare? 5/7/2015

Machine vision and video streaming systems are used for a variety of purposes, and each has applications for which it is best suited. Essentially, video streaming captures continuous video streams for viewing by humans, whereas machine vision products capture snapshots for viewing and analysis by computer software. Since there is some degree of overlap in where they can be used, I thought it might be useful to set out the differences.

Machine vision systems capture discrete snapshots of products to analyze for abnormalities. For example, a machine vision system might analyze the snapshot to ensure the product is the correct size, color, orientation, is free of cosmetic defects, and has no foreign objects. Most of this analysis can be conducted with just one snapshot. Machine vision is used most frequently for automatic inspection, process control, counting objects, and measuring dimensions.
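
As a toy illustration of that kind of snapshot analysis, the following OpenCV sketch thresholds one image and checks whether the largest object falls within an expected size range. The file name and area limits are made-up values, not a recommendation for any real inspection line:

    import cv2

    # Hypothetical inspection: is the largest object in the snapshot
    # within the expected pixel-area range? (Values are illustrative.)
    MIN_AREA, MAX_AREA = 5_000, 20_000

    img = cv2.imread("snapshot.png", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    area = max((cv2.contourArea(c) for c in contours), default=0)
    print("PASS" if MIN_AREA <= area <= MAX_AREA else "FAIL", f"area={area:.0f}")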

Machine Vision Illustration

By contrast, video streaming systems capture an unbroken sequence of snapshots to make a continuous movie stream. Unlike machine vision, each new frame has a time relationship to the other frames in the stream. Unrelated scenes are often captured simultaneously using multiple cameras – in traffic monitoring, building security, and medical operations, for example. The systems also perform data compression on the movie stream so it can be transmitted over the available bandwidth of data links like the Internet, radio links, and data cables (Ethernet).

Video Streaming Illustration

Unlike machine vision, streaming video must combine and synchronize audio channels with video streams to maintain lip synchronization. This is not an easy task for lengthy movie streams, but it is quite important. Consider how annoying it is to view a speaker or singer with unsynchronized audio.

Some video streaming products, including many of Sensoray’s, can restore a compressed stream to its uncompressed state for immediate viewing with little delay – so the camera data can be viewed almost instantly. Machine vision systems do not want compressed data for their inspection function. If the data were compressed, time would be spent decompressing it before passing it to analysis software. In fact, many machine vision systems use uncompressed monochrome (black and white) images to simplify their software analysis.

Some video streaming products allow embedding a small image within a larger one to produce a picture-within-a-picture. The small picture might be a magnified view or a movie narrator. This feature is not a requirement of machine vision.

Combining digital information with the video stream is also possible with some video products. For example, Sensoray systems can add closed caption text and telemetry data to a serial movie stream. Machine vision systems cannot use this type of information, making them less useful for applications in which a human audience views text translation of accompanying audio.

There is no point in using streaming video for automated inspection on a manufacturing line: there is no software to analyze the frames, and no use for streaming video’s embedded audio, compressed format, or closed caption data.

There are a few other types of video processing besides streaming and machine vision. For example, astronomers capture time-spaced snapshots of a region of interest and add the images together to intensify faint objects. They sometimes subtract frame data to detect objects moving against a background. In such cases, machine vision techniques are applied to streaming data.
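
Both tricks reduce to simple array arithmetic. Here is a NumPy sketch of stacking frames to intensify faint objects and differencing against a background to flag motion, with random data standing in for real captures:

    import numpy as np

    # Fake stack of 50 noisy monochrome frames standing in for real captures.
    frames = np.random.poisson(4, size=(50, 480, 640)).astype(np.float32)

    stacked = frames.mean(axis=0)           # averaging raises signal-to-noise
    background = frames[:-1].mean(axis=0)   # everything before the last frame
    motion = np.abs(frames[-1] - background) > 5.0   # changed pixels

    print(stacked.shape, motion.sum(), "pixels flagged as moving")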

To read this article in its entirety, please visit Design News.

Our Aging Infrastructure: 3/31/2015

Get the best performance from water and sewer video pipeline inspection equipment

Pipeline inspection and repair has become a hot-button issue in recent years as the world tries to come to grips with the combination of seriously aging infrastructure and water scarcity.

I’ve been reading up on the issue and the situation looks pretty dire. For example, a CNN article, “Experts: U.S. water infrastructure in trouble,” discusses the EPA’s estimate that 30 percent of pipes in systems that deliver water to more than 100,000 people are between 40 and 80 years old. About 10 percent of pipes are older. The report warns that, “To fix broken water pipes today with all the new cable lines, fiber lines, electrical lines has made these fixes more complicated and more expensive.” The cost to fix the US water system? The EPA’s best guess is about $335 billion over the next 20 years.

Another alarming statistic appears in the 2013 Report Card for America’s Infrastructure, which warns that much of our drinking water infrastructure is nearing the end of its useful life. There are an estimated 240,000 water main breaks per year in the United States. According to the American Water Works Association (AWWA) the cost of replacing every pipe could reach more than $1 trillion over the next few decades.

But it’s not all doom and gloom. We are adapting and developing technologies to fix the problems. According to Municipal Water and Sewer Magazine, the drain cleaning and municipal pipeline inspection industry has moved from one that could only observe existing pipe conditions to one that inspects, reviews, catalogues and analyzes collected data, allowing technicians to forecast potential problems in sewers and drainlines.

So let’s use these technologies to fix the problems, right? Not so fast. While water infrastructure inspection and repair is a growing $20 billion market, many say it still needs better technology to make maintenance affordable and more cost-effective for municipalities. For example, a report by Lux Research, “Plugging the Leaks: The Business of Water Infrastructure Repair,” discussed in Water & Wastes Digest, concludes that the best solutions will be based on technologies that can monitor the entire water infrastructure and allow owners to target sections in most urgent need of repair.

One technology that fits that bill is Sensoray’s Model 2253P Codec, which is being used in video pipeline inspection equipment. The device combines an audio/video (A/V) codec with a GPS receiver and multifunction ports. It can simultaneously encode, decode, and preview A/V content, and it is housed in the rugged, compact enclosure that pipeline inspection demands. All operating power is supplied by a single USB port, giving the device the flexibility these applications require.

Multiple, independent video processors allow two different video streams to be produced simultaneously from a single composite input. One of the streams can remain uncompressed for real-time previewing, or both streams may be compressed. In addition, image transformations such as resolution, rotation, and mirroring are independently configurable for each stream, as are compression type and bit rate. This means users can easily adjust the operational parameters to fit specific requirements.
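
As a sketch of what that per-stream independence might look like from software, here is a hypothetical configuration structure in Python. The field names and values are illustrative inventions, not Sensoray’s actual SDK; see the 2253P documentation for the real interface:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StreamConfig:
        # Hypothetical per-stream settings; names are illustrative only.
        width: int
        height: int
        rotation_deg: int = 0               # e.g. 0, 90, 180, 270
        mirror: bool = False
        codec: Optional[str] = None         # None = uncompressed
        bitrate_kbps: Optional[int] = None

    # One low-latency uncompressed preview stream and one compressed
    # archival stream, both derived from the same composite input.
    preview = StreamConfig(width=640, height=480)
    archive = StreamConfig(width=720, height=480,
                           codec="h264", bitrate_kbps=2000)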

Each of the two multifunction ports included in the device can operate as an incremental quadrature encoder interface or as dual general-purpose digital inputs (GPIO). Encoder counts, GPS data, and GPIO states can be monitored over the device’s USB connection, and real-time encoder counts and GPS data can be overlaid onto any video stream. What’s the benefit of that feature? Simple – it means you can collect and archive all captured video and its corresponding data in tandem with other GIS mapping systems.
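
Converting raw encoder counts into a human-readable overlay is simple arithmetic. In the sketch below, the counts-per-revolution and feet-per-revolution values are hypothetical; they depend on the cable reel or measuring wheel in use:

    # Hypothetical cable-distance overlay built from quadrature encoder counts.
    COUNTS_PER_REV = 2048      # encoder resolution (illustrative)
    FEET_PER_REV = 1.5         # measuring-wheel circumference (illustrative)

    def overlay_text(counts: int, lat: float, lon: float) -> str:
        feet = counts / COUNTS_PER_REV * FEET_PER_REV
        return f"DIST {feet:7.1f} ft  GPS {lat:.5f}, {lon:.5f}"

    print(overlay_text(150_000, 45.43125, -122.77681))
    # -> "DIST   109.9 ft  GPS 45.43125, -122.77681"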

Figure 1 shows how the GPS and text overlay work together with the GIS mapping system to provide a complete “snapshot” of municipal infrastructure.

Encoder Illustration

Filming the Future: 2/20/2015

Subsea video technology moving to integrate HD low latency into existing systems

Recently it seems that everywhere I look I see more footage taken using subsea vehicles, whether it’s remotely operated underwater vehicles (ROVs) or autonomous underwater vehicles (AUVs).

For years, ROVs (linked to a ship by a tether or an umbilical cable and operated by a crew aboard a support vessel) and AUVs (robots that travel underwater with no operator input) have been common in deep water industries like offshore oil and gas extraction.

Now they are playing a starring role in films and documentaries, like those produced by National Geographic, the Discovery Channel, and the BBC. They are also being widely used to study the ocean, and they’ve directly led to a number of discoveries of deep sea animals and plants.

Sensoray offers several products whose small size and low power consumption make them ideal for both portable and embedded applications on ROVs and AUVs. For example, Model 2253 A/V Encoder/Decoder with Overlay is a compact USB-compatible audio/video codec that is powered from a single USB port. Multiple, independent video processors allow the unit to simultaneously produce two different video streams from its single composite input and send the streams out over USB. One of the streams can be a low-latency uncompressed stream (useful for real time previewing) and the other compressed, or both streams may be compressed.

As more ROVs and AUVs are commissioned for commercial, scientific, and educational purposes, I’ve come to believe that the future of their imaging systems is moving to high definition (HD) and low latency video capture over Ethernet. Customers are definitely looking at new HD products to integrate into their current systems. Sensoray offers the Model 2263 and the soon-to-be-released 3364, which capture video from several popular types of HD sources.

Realizing that low latency HD capture is indispensable for ROVs, Sensoray is developing technology that combines high video quality with latencies under 100 ms.

By all accounts, devising next generation imaging systems will not be a simple matter. According to an interesting article I read, “Imaging Systems for Advanced Underwater Vehicles” in the Journal of Maritime Research, sub-sea images have special characteristics that have to be taken into account when designing an underwater optical imaging system – due especially to the different interactions of water and light.

Also, if the imaging system has to be mounted on an ROV or AUV, the system’s autonomy is very important, so human intervention must be kept to a minimum. Manual configuration must be avoided during image acquisition. With AUVs that have no link to a support vessel, power consumption has to be considered, and no power-demanding illumination systems can be used. Sensoray has a wide choice of video capture products that fit into most tight power budgets. Among them is the Model 3011 HD IP camera, which consumes less than 3 watts of power.

Huge market predicted

To give you an idea of how huge the potential market is for ROVs and AUVs, and why we who develop components for imaging systems for these vehicles should be involved, here’s some information on market projections for the industry as a whole.

“Unmanned Underwater Vehicles Market Worth $4.84 Billion by 2019,” a report by market research firm Markets and Markets, states that the global ROV market was an estimated $1.2 billion in 2014 and is expected to grow at a compound annual growth rate (CAGR) of 20.11 percent through 2019. The global autonomous underwater vehicle (AUV) market was an estimated $457 million in 2014, with an expected CAGR of 31.95 percent through 2019.

This information interested me greatly and I did some more digging, eventually coming upon a report by energy business advisors Douglas-Westwood. Their World ROV Operations Market Forecast 2013-2017 estimates total ROV operations expenditure of $9.7 billion, an increase of nearly 80 percent over the previous five-year period. About three-quarters of the expenditure is expected to be for drilling support, which is expected to increase over the period by 13 percent.

These are some very interesting predictions and are likely to have a huge impact on the development of new imaging systems for subsea vehicles of all kinds.

And We’re Rolling: 1/7/2015

An introduction to video capture and data acquisition: The Engineer’s Notebook

Hello, and welcome to Sensoray’s new blog, Video Capture and Data Acquisition: The Engineer’s Notebook. Our team is consistently striving to fulfill its mission to develop and supply the latest, highest quality and most effective OEM electronics for video imaging, data acquisition and machine control to our customers. As a part of fulfilling that mission, this blog will draw on the extensive industry experience of our team to discuss trends within the electronics industry. In addition, we hope that your contributions on this platform will allow us to chart a course through the most cutting-edge information available, shaping the future of electronics as it occurs.

The entire Sensoray team is passionate about innovations in our field, and our team of bloggers is really looking forward to sharing their thoughts with customers and others interested in OEM electronics technology.

As a company, we are committed to technical excellence and design innovation. Since 1982, Sensoray has offered customers practical, reliable solutions alongside outstanding live technical support and service. We provide tools to aid rapid development, ensure customer success, and with the addition of this blog, shed some light on trends and the future within our industry.

But the blog is not just about us: help us fit it to your needs by suggesting topics you would like to hear about. The possibilities are nearly boundless, and we would like your help in shaping the future of this platform.

Once again, welcome! We’re looking forward to sharing all kinds of exciting electronics news with you here.
