Image Enhancement in Heavily Degraded Visual Environments using Image Processing Methods

From Army 17.1 SBIR Solicitation

A17-046 TITLE: Image Enhancement in Heavily Degraded Visual Environments using Image Processing Methods

TECHNOLOGY AREA(S): Electronics

OBJECTIVE: Produce an enhanced image suitable for driving ground vehicles in heavily degraded visual environments with minimal latency (<80 ms) using image processing methods. This is not a solicitation for new camera hardware.

DESCRIPTION: This effort would develop a method to enhance IR video imagery in heavily degraded visual environments using image processing. In heavily degraded environments, it has been shown that in many cases some photons do get through the obscurant, but typical image processing techniques (acutance/contrast/edge enhancement) are inadequate because the SNR is too low. This topic seeks new computational image processing techniques that can be used to extract very small signals (SNR<1) from noisy, degraded images, with the goal of safely driving vehicles in such an environment. Inherent in this method must be the ability to function when the camera (vehicle) is in motion. A priori knowledge of the un-degraded scene should not be assumed.

PHASE I: Demonstrate a computational methodology that can extract small, SNR<1 signals from heavily degraded video imagery. The resultant imagery should be sufficiently enhanced to allow driving in the degraded environment at low speeds.

PHASE II: Implement the enhancement from Phase I in real time with minimal latency (<80 ms) suitable for driving a ground vehicle at 16 kph in heavy dust. Implement the method in hardware (e.g. GPU, FPGA) and demonstrate the ability to implement in real time.

PHASE III DUAL USE APPLICATIONS: Test the ability of humans to safely drive using the Phase II system in a degraded environment. Modify the algorithm as required to improve detection accuracy and processing speed to maximize vehicle speed in the degraded environment. Successful testing should allow deployment of the system on any vehicle in degraded environments, such as supply convoys, or ground patrols. Similar capabilities might be useful in the commercial world for long haul truckers and similar vehicles that must keep moving under poor visibility conditions. Eventually autonomous vehicles could use similar technology, with computer vision instead of human drivers.

REFERENCES:

1. “LWIR thermal imaging through dust obscuration”, Forrest A. Smith, Eddie L. Jacobs, Srikant Chari, and Jason Brooks, Proc. of SPIE Vol. 8014, 80140G-12.

2. “See-through Obscurants via Compressive Sensing in Degraded Visual Environment”, Richard Lau, T. K. Woodward, Proc. of SPIE Vol. 9484, 94840F-8.

3. “Real-Time Convex Optimization in Signal Processing”, Jacob Mattingley and Stephen Boyd, IEEE Signal Processing Magazine [61], May 2010.

KEYWORDS: DVE, degraded visual environments, real time image enhancement

TPOC-1: Brian Kowalewski

Phone: 703-704-3060

Email: brian.j.kowalewski.civ@mail.mil

Submitted Proposal: A171-046-0874

Note: Formatting changed to suit the web page layout; page title blocks and disclosure restriction blocks have been removed from the text.

Abstract

Improved video image processing for infrared vehicle cameras, to enhance images and image data during low visibility and/or rapid changes in visibility, will improve driving safety for human drivers, unmanned vehicle operation, and upcoming autonomous vehicles. This work creates software tools to thoroughly analyze very low signal-to-noise video in the two-dimensional image domain, to use multi-frame correlation in the time domain, and to track useful image features, enhancing the display for drivers through image enhancement and/or false-color overlays. This has potential use in military convoys and travel, and in commercial long-haul trucking.

1 Identification and Significance of the Problem or Opportunity

For a driver or a future autonomous driving unit, being able to see objects and hazards well enough to maneuver safely in low-visibility conditions is very important. This includes driving in a known poor-visibility condition and through a sudden, unpredictable change of visibility. There is a need to process low signal-to-noise image data to detect useful image features and information that can enhance the image display and/or provide false-image overlays, making it safe to continue driving in the poor-visibility environment.

1.1 The requirement

In order for a visual object to be detectable in an image, it must influence the average sensor A/D (Analog-to-Digital converter) output by more than 0.5 to 1.0 counts, either lower or higher, compared to the surrounding incoming white-out flux, across a moderate number of pixels for multi-pixel objects, assuming the A/Ds are reasonably matched to the detectors (standard deviation of the random readout noise of 8 counts or less). Detector readout noise is different from image noise, but may appear to be the same when readout noise is the dominant noise. For a very small image object close to pixel size, that object's influence on the flux readout must be larger than the standard deviation of the readout noise. There is a transition between the small-object requirement and the larger-object requirement, but as single-pixel objects grow larger, the criterion quickly transitions to the multi-pixel requirement. If these criteria are not met, there is no detectable data for those objects in the image. These criteria are used because SNR values can be based on different aspects of the image data and noise, and the SNR must actually be above 0: if there is no data, no amount of processing is going to turn an image with no data into usable image data.
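
As a concrete illustration of this criterion, the following minimal Python sketch (the function name and example numbers are mine, for illustration only) applies the single-pixel and multi-pixel rules under a Gaussian readout-noise assumption:

    import numpy as np

    def object_detectable(mean_count_shift, readout_sigma, n_pixels):
        # Near pixel-sized objects: the object's influence on the readout
        # must exceed the readout-noise standard deviation itself.
        if n_pixels <= 1:
            return abs(mean_count_shift) > readout_sigma
        # Multi-pixel objects: averaging over n_pixels lowers the effective
        # noise by sqrt(n_pixels), so a 0.5-1.0 count shift can be enough
        # when the readout sigma is 8 counts or less.
        return abs(mean_count_shift) > readout_sigma / np.sqrt(n_pixels)

    # Example: a 0.75-count shift spread over 200 pixels with sigma = 8 is
    # detectable (8 / sqrt(200) is about 0.57); the same shift on a single
    # pixel is not.
    print(object_detectable(0.75, 8.0, 200), object_detectable(0.75, 8.0, 1))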

Random-noise triggering of shaped-object detections by AI learning/correlation algorithms, as has recently been reported in some science magazines, is simply invalid detection triggering. Although it provides some interesting information and speculation about correlation, and about the parallel correlation that might be occurring in the human brain, it is currently far too unreliable to be applicable to any condition where safety is a concern.

1.2 Time Domain Processing

Since a video is a series of frames, if an object influences some of the frames but not all of them (dust is swirling around), the object might be tracked across time, or across multiple frames, and false images can be used to temporarily fill in the missing data. Human visual perception framing time is well below 30 frames per second, and sensor frame rates are often much faster, such as 60 frames per second. Frame-to-frame image correlation may be useful for this fill-in when the image-to-image position data is stable, or when there is some kind of image feature across the frames that can be used for image alignment.
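
As one hedged sketch of this kind of frame-to-frame alignment and averaging (in Python with NumPy; the function names are illustrative, not an existing tool's API), phase correlation can estimate the whole-pixel shift between frames so that stable areas can be stacked:

    import numpy as np

    def integer_shift(ref, frame):
        # Phase correlation: estimate the whole-pixel (dy, dx) translation
        # that best aligns `frame` with `ref`.
        cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = corr.shape
        # Peaks past the midpoint correspond to negative shifts.
        return dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx

    def aligned_average(frames):
        # Shift each frame onto the first and average them; with N aligned
        # frames of independent noise the SNR improves by roughly sqrt(N).
        ref = frames[0].astype(float)
        acc = ref.copy()
        for f in frames[1:]:
            dy, dx = integer_shift(ref, f.astype(float))
            acc += np.roll(f.astype(float), (dy, dx), axis=(0, 1))
        return acc / len(frames)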

For longer periods with no usable video data, a predictive algorithm can provide false images of where the last visible features were, or should be, given the estimated object velocities and the locations of the last known features. Such false images could help keep the driver's cognitive load lower and give more driver confidence; however, as time passes, the data quickly becomes unreliable and the danger to the driver quickly increases.

Fortunately, most computer systems have audio hardware. Adding a couple of small speakers for mid- and high-frequency generation, and microphones at the front, using a small audio frequency band that currently has low content, would allow echoes off possible vehicles ahead to be used for locating them and determining whether their speed is changing relative to the vehicle, as when a vehicle in front is stopping or has stopped. This speed and distance information can be used to enhance the false-data images and help improve safety until more real image data becomes available or travel needs to slow or stop. Low-cost, low-resolution cameras can be placed on the fenders for short-distance viewing of the curb and centerlines; video processing can scale, compress, and/or stretch that imagery to match the driver's viewpoint, so it can reliably enhance the driver's image display in extremely poor driving conditions. Other reliable active and passive sensor systems could provide input data for video image cueing, to improve drivability or reduce danger in a poor-visibility condition, but they are outside the scope of this project, since analysis of inputs other than the infrared camera was not called for in this topic in the BAA.
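
A minimal sketch of the dead-reckoning idea above, assuming a simple constant-velocity model (the names and the coast limit are illustrative, not a designed value):

    MAX_COAST_SECONDS = 2.0  # illustrative limit before a prediction is dropped

    def coast_feature(last_position, last_velocity, seconds_since_seen):
        # Dead-reckon the last confirmed feature position forward at its
        # last estimated velocity; the paired confidence decays toward zero
        # so the display can fade or flag the overlay as it goes stale.
        predicted = last_position + last_velocity * seconds_since_seen
        confidence = max(0.0, 1.0 - seconds_since_seen / MAX_COAST_SECONDS)
        return predicted, confidence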

1.3 Approach

The image data must be carefully studied to determine which of its aspects can be used to enhance it. Different aspects can require different methods, and various algorithms can be tried for each aspect. Different portions of the image may have different aspects, so the optimal algorithms can be applied to the different portions of the image. Before the optimal aspect processing can be applied, detection algorithms must identify which aspects will improve the image data. The aspect detection may simply be to apply multiple processing algorithms and choose the one that works best for that data. This type of processing, driven by the data content and processing different data areas for the best results, is often referred to as "non-linear image processing".
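
A minimal sketch of this per-region, data-dependent selection, using SciPy's standard filters as stand-in candidates and a placeholder scoring metric (the real criteria would have to come from studying the data):

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def enhance_region(region):
        # Try several candidate filters on one image region and keep the
        # one that scores best -- a stand-in for the data-dependent
        # ("non-linear") selection described above.
        candidates = [
            region.astype(float),
            median_filter(region.astype(float), size=3),
            uniform_filter(region.astype(float), size=3),
        ]

        def score(img):
            # Placeholder contrast-to-noise estimate: spread of a smoothed
            # copy (signal proxy) over the residual noise it removed.
            smooth = uniform_filter(img, size=7)
            noise = np.std(img - smooth) + 1e-9
            return (np.percentile(smooth, 95) - np.percentile(smooth, 5)) / noise

        return max(candidates, key=score)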

As noted above, since a video is a series of frames, an object that influences some frames but not all of them (dust is swirling around) might be tracked across time, or across multiple frames, and false images can be used to temporarily fill in the missing data. The image data may be constantly shifting across the image area; when frames can be shifted to overlay each other, stable image areas can be obtained for a short time and averaged together, increasing the signal-to-noise ratio, especially in more distant areas with lower signal-to-noise. Closer objects, which tend to have higher signal-to-noise but whose images are more affected by the vehicle's velocity, will show a frame-to-frame change in object size; this could be used to estimate the distance of stationary objects when the vehicle's velocity is known and the size-change rate can be accurately calculated.
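
To illustrate the size-change idea with a worked formula: for a stationary object approached head-on, the relative growth rate of its image equals vehicle speed divided by distance, so d = v * s / (ds/dt). A minimal sketch (the function name is mine):

    def distance_from_expansion(size_prev, size_curr, frame_dt, vehicle_speed):
        # For a stationary object approached head-on, the image size s
        # grows at a relative rate of (speed / distance), so the distance
        # is d = v * s / (ds/dt).
        ds_dt = (size_curr - size_prev) / frame_dt
        if ds_dt <= 0:
            return None  # receding object or measurement noise: no estimate
        return vehicle_speed * size_curr / ds_dt

    # Example: at 16 kph (about 4.44 m/s), an object image growing from
    # 20.0 to 20.3 pixels between 60 Hz frames is roughly 5 m away.
    print(distance_from_expansion(20.0, 20.3, 1.0 / 60.0, 4.44))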

The bottom line is that this work is primarily the study and analysis of the data to determine which image processing algorithms and mathematics are effective in bringing out data and detecting data features. What is useful for a dust environment may not be applicable to a different environment, such as fog, and of course nothing will be useful in a dust environment approaching the zero-data white-out condition, where no amount of video processing is going to help. Other equipment, such as radar and/or sound echo processing, would be more effective for handling vehicle movement during a total video white-out.

1.4 Analysis Tools

Some generic image processing tools are available, but they do not compare with a tool made specifically for the analysis of image data that also allows the quick creation of new algorithms for image processing tests. Being able to immediately see the effects of a test algorithm is important to the rapid development of an optimized algorithm set.

Visual analysis tools can be very important in analyzing image data. Figure 1 shows an early version of a data analysis program used to analyze infrared sensor data. In this particular image we see structured data.

Figure 1: Early version of an Image Analysis Program

(Note: Sensor performance data removed from image.)

This screen shot is from a line-scanning infrared camera; the data was captured from a sensor pointing at a blackbody (a constant-temperature reference tool) that was being evaluated to determine whether it could be used during testing. Time runs from left to right. We can see the blackbody temperature fluctuating quickly as it works to hold its temperature at 100 C. It is not that the blackbody is very poor; it is that the sensor system is ultra-sensitive.

The data is brought out in false color through fine scaling. Below the image is a scale bar for scaling outside the auto-scale range. The bar, from left to right, represents the full possible binary data range. The lower line separation represents the data range of the unequalized detector data. The upper bars represent the equalized detector data range, which is the display range normally set by the auto-scale operation. The vertical line is the mean of the data. The two small rectangles are the current minimum and maximum of the displayed range, which can be dragged left and right with the mouse cursor. The image shown is a 256-level false coloring of the data between these two settings on the scale bar, where dark is the lowest level and red is the highest. This scale-bar control scheme and code was written by me, and it is the type of tool needed to quickly do detailed image data analysis. The speckled data is the white-noise floor of the data.
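
A minimal sketch of the kind of mapping the scale bar controls, assuming a user-selected display range and a 256-entry palette (the names are illustrative, not the actual program's code):

    import numpy as np

    def false_color(frame, display_min, display_max, palette):
        # Map raw counts in [display_min, display_max] onto a 256-entry
        # RGB false-color palette (dark at the low end, red at the high
        # end, as on the scale bar); out-of-range values clamp to the ends.
        span = max(display_max - display_min, 1e-9)
        idx = np.clip((frame - display_min) / span * 255.0, 0, 255)
        return palette[idx.astype(np.uint8)]  # palette: (256, 3) uint8 array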

Figure 2: Twelve Repeated Blackbody Tests at the Same Temperature

Figure 2 shows the results of 12 repeated tests at the same temperature; the image shown above is from the highest-noise data point. Being able to analyze data in very fine detail can be very important to a successful evaluation and to applying the right algorithms. The blackbody stability issue was one of several in which occasional moderate-level temperature corrections resulted in a large percentage of sensor test and calibration failures, preventing valid testing of a camera and delaying deployment. This particular issue started when a newer blackbody with an easy-to-set temperature was used in place of the older, more stable blackbody, which was tedious to set because it had only up and down temperature buttons and took a long time to adjust.

A later version of the image analysis program is shown in Figure 3. It shows image data at a zoom level of 10, allowing individual pixels and results to be easily examined. Although the cursor is not shown, information about the data point under it appears below the scale control bar. Algorithm control buttons are at the upper right, with statistics and current image information below them. A standard drop-down menu, plus additional control and plot buttons in a second toolbar, keeps the effort needed for data analysis to a minimum.

Figure 3: Recent Image Display Program Showing Zoomed Pixels

(Note: Sensor performance data removed from image.)

An example of applying a complex custom algorithm set is shown in Figure 4. In this case a thorough evaluation of the detectors was required. After carefully studying the detector data and devising formulas and algorithms to evaluate the detectors, this control box was created to tune and execute the algorithms, and it worked very well.

Infrared image data is not like the standard video data we view on TVs every day, which is generally high signal-to-noise imagery. The program I have been describing is for a cryogenically cooled, ultra-sensitive infrared sensor system with a very unique data format and needs. However, an equivalent program is needed to properly evaluate the sensor data for this project. An off-the-shelf program probably will not have the right characteristics, probably is not economically feasible for a small project, and probably will not allow new algorithms to be added by simply writing a few more lines of source code, plus a control to enable or disable them, so their effects can be seen quickly.

I can write a basic analysis program, with a display layout similar to that shown, for careful analysis of the degraded video data. The data needs to be uncompressed, or uncompressed files can be made from the source video with readily available tools. If I have to write decompression code, that is a huge unknown; however, since I have written a program with this layout before, the rest is straightforward for me. Careful analysis and study of the image data is the primary way, and probably the best way, to determine the best algorithms for bringing out the most detail.

Figure 4: Complex Algorithm Control Dialog Box

2 Phase I Technical Objectives.

The Phase I and Phase I Option objectives are:

  1. Obtain IR Image Data and Specifications
  2. Discover Tools and Resources for working with the current Video Data
  3. Create a Video Image Analysis Program
  4. Study Video Data, Test Image Processing Algorithms
  5. Research Target Vehicle's Execution Environment
  6. Evaluate Video Processing across Multiple Video Data Quality and Sets
  7. Create a Phase II Proposal

3 Phase I Statement of Work

The work for Phase I is 6 months and the Phase I Option is 3 months in length; the work will be conducted at Lightning Fast Data Technology Inc. headquarters (3419 NE 166th Ave, Vancouver, WA 98682). Note: Tasks are not listed in sequential work order since the work covers multiple employees.

3.1 Phase I Tasks

3.1.1 Task 1: Obtain IR Image Data and Specifications

This task coordinates with the Army to discover what IR video is available for analysis, coordinates to obtain the data, prepares the data for internal computer access, obtains the stored video data format and the camera's streaming communications data format, and researches IR camera information and any other related information.

3.1.2 Task 2: Create Analysis Program to Analyze IR Video Data

This is a custom Windows-based program that will allow choosing and loading multiple video frames; zooming, viewing, and scrolling of video data; statistical analysis of the video data frames; scaling tools and algorithms to assist in scaling data for evaluation; false display color to assist in viewing and analyzing data; etc. This will be the foundation program for testing video processing techniques and algorithms, which is a separate task. This interactive tool will probably operate reasonably fast, but it is not assumed to work and display at real-time video rates. It will be able to process video files, apply video processing, show results before and after, and save post-processed video files in the same format, which can then be played at video frame rate by a display program (assumed to exist and be available for use). This will leverage a huge amount of internal software components, which will be put into a support library. I have developed techniques that keep the program's window interface fully interactive at all times, even when doing time-consuming work such as video processing of a large video file, as sketched below.
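
One hedged sketch of that interactivity technique, using a background worker thread posting progress to a queue the UI loop polls (a Python stand-in for the actual Windows implementation; all names are illustrative):

    import queue
    import threading

    progress = queue.Queue()

    def process_video_async(frames, stage):
        # Run the long processing job on a worker thread and post progress
        # to a queue that the UI loop polls, so the program's windows stay
        # fully interactive while a large video file is processed.
        def worker():
            for i, frame in enumerate(frames):
                stage(frame)
                progress.put(i + 1)  # UI reads these to update a status bar
            progress.put(None)       # done marker
        threading.Thread(target=worker, daemon=True).start()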

3.1.3 Task 3: Discover Tools and Resources for Working with Video Data

This task coordinates with the Army to discover and obtain any programs they currently have for display and/or analysis of the IR video data, and to learn how to use those programs. It also includes searching for available open source programs, camera manufacturer-supplied programs, and any other programs that could assist in working with the video data.

3.1.4 Task 4: Analysis Program Support Library

Internally, a huge amount of privately developed source code is available to greatly reduce the effort and time needed to create a complex analysis program. This includes a large-data-set memory manager, a status and error logging library and display window, document creation components, screen capture and image compression for documents, plot windows, program display toolbars, buttons, objects, etc. These components will be put into a support library as needed to support the video analysis program. However, it will take direct project work to copy them from the other source libraries and assemble them into this support library. This source code was privately developed and carries Technical Data Rights (see the Technical Data Rights Assertions section).

3.1.5 Task 5: Study Video Data, Test Image Processing Algorithms

The video data will be carefully studied for content, and various video processing algorithms will be tried to determine which methods and algorithms can bring out detectable or displayable data content. There is a multitude of edge enhancement and detection methods, one-dimensional and two-dimensional data filters, non-linear filters, multi-frame image processing, image data tracking, and noise and data analysis techniques that can be tried and utilized. Detection of good versus degraded images or image areas can be used to determine when to apply degraded-image processing and when to supply image enhancement or display cueing, such as analyzed false-color overlays. Results and discussion of algorithm experiments will be reported in the monthly reports and fully covered in the final report. Note: The time available in Phase I and the Phase I Option is far too short to exhaustively test every possible image processing technique and algorithm; however, by the end of Phase I, I should have a good idea whether there is recoverable data in the video content that can be used to enhance the display or allow false-data overlays in the video display.
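
As one hedged example of detecting degraded image areas to trigger the heavier processing path, a local-SNR mask (a SciPy-based sketch; the window size and threshold are illustrative, not values derived from the data):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def degraded_mask(frame, window=15, snr_threshold=1.5):
        # Estimate local SNR as local mean over local standard deviation
        # and flag regions below a threshold as degraded, so the heavier
        # enhancement path (or a false-color overlay cue) is applied only
        # where needed.
        f = frame.astype(float)
        mean = uniform_filter(f, window)
        var = np.maximum(uniform_filter(f * f, window) - mean * mean, 0.0)
        return mean / (np.sqrt(var) + 1e-9) < snr_threshold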

3.1.6 Task 6: Research Target Vehicle's Execution Environment

During pre-release discussions it was discovered that there is strong interest in having the developed image processing algorithms execute on the vehicle's own compute equipment. This effort is to discover the architecture of that computer equipment; the configuration's video camera resources; the available processing resources; what new equipment (if any) would be acceptable; what tools, such as compilers and development tools, would be required to support an integration effort, and their cost; whether we need to coordinate with other integration efforts; and what video hardware accelerators are available and what development tools and costs they require. The result is a document of the detailed needs, requirements, and costs that would likely be encountered in programming for the specific equipment, and an understanding of what would be required to integrate the developed video enhancement algorithms into the system. Create an "Integration Information Report".

3.1.7 Task 7: Project Management

General project support and management. Work scheduling, planning, and progress tracking and review. SBIR and government contracting compliance review and planning. Monthly review and reanalysis of the project cost estimate.

3.1.8 Task 8: Phase I Monthly Status and Progress Reports

Status and Progress Reports document the status of the overall project, the project's objectives for the month, the progress of each task, results obtained, and any concerns. They are provided within 15 days after the completion of each month, excluding the last month, which is included in the Final Report.

3.1.9 Task 9: CMRA Reporting

Provide Contractor Manpower Reporting Application (CMRA) data as described in the ARMY 17.1 Small Business Innovative Research (SBIR) Proposal Submission Instructions.

3.1.10 Task 10: Phase I Non-Proprietary Summary Report

Create Non-Proprietary Summary Report as described in ARMY 17.1 Small Business Innovative Research (SBIR) Proposal Submission Instructions.

3.1.11 Task 11: Phase I Final Report

Contains detailed information for project objectives, work performed, results obtained, and estimates of technical feasibility. Provided within 30 days of Phase I completion.

3.2 Phase I Option Tasks

3.2.1 Option Task 1: Study Video Data, Test Image Processing Algorithms

Continue the process of evaluating the heavily degraded video data, and of testing, experimenting, and finding the best ways of processing and improving the image processing algorithms for the heavily degraded video. See the task description in Phase I.

3.2.2 Option Task 2: Obtain Additional Video of Other Video Environments

Work to obtain more video content beyond the target heavily degraded visual environment. It would be desirable for the video to transition from good visibility into heavily degraded visibility and back into good visibility. Long-running videos would be useful.

3.2.3 Option Task 3: Study the Processing of Video Data

Apply the video processing algorithms, as directed by the Principal Investigator, to the transitioning, less degraded, and longer-running video data sets. Play them at video frame rate and evaluate different aspects of the processing results. Document interesting results, provide them to the Principal Investigator, and bring attention to notable effects and useful details.

3.2.4 Option Task 4: Project Management

General project support and management. Work scheduling, planning, and progress tracking and review. SBIR and government contracting compliance review and planning.

3.2.5 Option Task 5: Phase II Proposal

Write the Phase II proposal for "Image Enhancement in Heavily Degraded Visual Environments using Image Processing Methods".

3.2.6 Option Task 6: Phase I Option Monthly Status and Progress Reports

Same as described in Phase I, but for the Phase I Option time period.

3.2.7 Option Task 7: CMRA Reporting

Provide Contractor Manpower Reporting Application (CMRA) data as described in the ARMY 17.1 Small Business Innovative Research (SBIR) Proposal Submission Instructions.

3.2.8 Option Task 8: Phase I Option Non-Proprietary Summary Report

Create Non-Proprietary Summary Report as described in ARMY 17.1 Small Business Innovative Research (SBIR) Proposal Submission Instructions.

3.2.9 Option Task 9: Phase I Option Final Report

Contains detailed information for project objectives, work performed, results obtained, and estimates of technical feasibility. Provided within 30 days of Phase I Option completion.

3.3 Deliverables

  1. IR Video Analysis Software: Windows-based program for analysis of video data, used to help develop the image processing algorithms.
  2. Video Analysis Software Support Library: Privately developed program source code compiled into a library to reduce the work needed to create the IR Video Analysis Software.
  3. IR Video Analysis Program User Manual: A user manual covering the Analysis program's use, features, installation, and system requirements.
  4. Image Process Algorithm Document: Documentation of video evaluation notes, algorithm tests, results, details, etc., kept to track the information internally. It is not really meant for viewing or publication, and no effort will be made to make it presentable. The important information and results will be summarized in the monthly and final reports.
  5. Integration Information Report: Summary of research done to discover the work needed to integrate new algorithms into the vehicle's IR camera compute equipment.
  6. Monthly Status and Progress Reports: Monthly status reports for Phase I and the Phase I Option. These are primarily for ongoing project status and technical progress, to keep the contracting officer informed.
  7. CMRA Reporting: Required; MS Excel spreadsheet or manual entry (method TBD) as described in the ARMY 17.1 Small Business Innovative Research (SBIR) Proposal Submission Instructions.
  8. Non-Proprietary Summary Report: Required as described in the ARMY 17.1 Small Business Innovative Research (SBIR) Proposal Submission Instructions.
  9. Phase I and Phase I Option Final Reports: Contains detailed information for project objectives, work performed, results obtained, and estimates of technical feasibility. Provided within 30 days of Phase I and Phase I Option completion.

3.4 Technical Data Rights Assertions

Technical Data for Restrictions: Video Analysis Support Library
Basis of Assertion: Library of software components developed at private expense.
Asserted Rights: SBIR Data Rights and Limited Rights
Name of Person Asserting Restrictions: Lightning Fast Data Technology Inc.

4 Related Work.

I have 7 years of experience in high-accuracy airborne infrared sensor signal processing R&D, which had similar issues and considerations in working with low signal-to-noise image data. I learned many statistical and high-quality methods for data measurement, improved my ability to develop signal processing algorithms for studying data noise, and used the data itself to drive signal processing algorithm development. This work resulted in signal processor prototypes and sensor emulators delivered to Huntsville, which provided proof of concept for the eventual creation of a 747-mounted IR platform; I left Boeing (1990) during initial preliminary sensor-platform testing and before deployment. The project is far too old to provide contacts.

In 2004, I started working on the infrared camera for the U-2 "Dragon Lady", considered a military asset. Litton (bought by Northrop Grumman before 2004) in Tempe, AZ, had developed the IR sensors for the 747 airborne sensor platform discussed above, had evolved the detector readout hardware, and had created a compact infrared sensor in a small Dewar for use as a camera. However, there was trouble getting reliable, repeatable, and consistent operation. I improved the sensor test and calibration software and the capability to capture data during tests (the improvements had to go through acceptance testing). I created an analysis and display program to look very closely at the captured data and understand why the sensors and testing were not always working as expected. I created very high quality analysis code and added complex analysis algorithms developed from careful study of the sensor output data, and it became obvious when the system was working as expected and when there were problems. Eventually, the software became critical to reprocessing the test data and completing final calibrations and IR sensor acceptance for field deployment.

The IR sensor chip testing had also become stalled; testing often ran as an almost blind, 4-hour test with questionable results. A small company was hired to build readout electronics around the 16-bit A/Ds that were selected, and I wrote a test program for control, data display, and analysis. This allowed the data to be seen quickly, and the test could be paused and the data studied. The tester could often quickly determine whether the sensor chip was functioning, which was a frequent issue, usually from a poor test socket connection after the cryogenic cooling cycle of the test Dewar, saving considerable time. The original test algorithms were optimized and corrected for edge conditions, and the number of test steps was reduced, allowing extensive testing to be done in less than one hour. After a test, the data could be studied and reprocessed. This reprocessing became very useful, since I was continuously cycling between improving the sensor work, verifying sensor test results, and organizing sensors by quality as next choices for replacement and deployment. Eventually the program automatically generated a very high quality RTF document, with extensive statistics tables, colored images, and plots, that could be opened with MS Word. After the report generation improvements, all of the earlier test data was reprocessed to produce the high-quality reports.

During this time, L3 Communications bought the facility, and then Goodrich (the prime contractor) bought the project and moved it to their "Sensors Unlimited" business in 2009. I didn't want to move to the east coast, but I continued to support the project through Sensors Unlimited, adding analysis improvements and adding support for a camera with a different IR filter combination and expanded test needs. I delivered over 20 versions (some intermediate; the last was version 26 in 2014, which was still being used for sensor camera calibration) of this very large and complex program, which operated as expected and without ever crashing (that I am aware of). The analysis and device test programs are each well over 70,000 lines of C code, sharing about 30,000 lines of C code in a common library (over 99.99% written by me).

This work is related in the ability to develop complex analytical and signal processing algorithms and code bases, and to carry a very complex signal processing project through a long, continuous-improvement development process to completion. Sensors Unlimited contacts: (contact information removed).

5 Relationship with Future Research or Research and Development.

(a) The anticipated results of Phase I and the Phase I Option will show whether detectable data exists in the video data. They will also have determined some of the types of signal and/or image processing techniques or algorithms that help bring out detectable data, and will give an idea of when the image processing should, or should not, be applied, helping establish some of the processing-trigger algorithms.

(b) The significance of the Phase I objective is the development of a specific analysis tool to evaluate and study the target infrared input video data, and the post-processed video data, in fine detail to determine the effectiveness of processing. This is a prelude to Phase II, which will allow further data evaluation and algorithm testing whose results can then be used to plan the image processing pipeline's algorithm set.

This creates a foundation for Phase II, where the image processing pipeline is created and used to determine the effectiveness of the applied algorithm set, and where improvement cycles are driven by more detailed analysis of the pipeline algorithms. Only after a well-defined set of algorithms has been established can real-time performance issues be addressed, the effective use of hardware accelerators be evaluated, and real-time GPU and FPGA hardware be considered. However, initial optimization of the real-time algorithms will target effective operation of the pipeline on a Xeon processor, since general-purpose flexibility allows algorithms to be carefully tuned before the costly process of hardware implementation.
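
A minimal sketch of such a pipeline harness, with per-stage timing against the topic's < 80 ms latency budget (the stage names and functions are placeholders, not a designed architecture):

    import time

    def run_pipeline(frame, stages):
        # Run the candidate enhancement stages in order and record each
        # stage's wall time, so the slowest stages can be found and tuned
        # on a general-purpose CPU before any GPU/FPGA port is attempted.
        timings = {}
        for name, stage in stages:
            start = time.perf_counter()
            frame = stage(frame)
            timings[name] = (time.perf_counter() - start) * 1000.0  # ms
        return frame, timings

    # Usage (hypothetical stages): run_pipeline(frame,
    #     [("degraded_mask", mask_stage), ("stack", stack_stage),
    #      ("false_color", color_stage)])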

(c) Regarding clearances, certifications, and approvals: no requirements were posted in the DoD SBIR BAA for the Phase I and Phase II R&D work of this topic. The focus will be on unclassified data sets and unclassified equipment and architectures, to allow the best return within the Phase II cost structure. Classified work is left for Phase III, where security requirements, if any, can be clearly specified and the cost structures to support them can be addressed and fully supported.

6 Commercialization Strategy

Improved infrared image processing in poor visibility conditions has both military and commercial benefits, providing enhanced video for drivers, unmanned vehicles, and autonomous vehicles.

For the military, better image processing of infrared video data in degraded visibility environments can enhance the reliable and safe movement of personnel and equipment across a wider range of conditions. It may also give advantages in military actions, with a possible reduction in casualties when adverse conditions are easier to overcome.

Use with commercial vehicles, such as long-haul trucks, will improve safety when encountering adverse weather and visibility conditions and, in some cases, may allow continued movement in reduced visibility. As commercial equipment evolves and comes into wider use, COTS (Commercial Off The Shelf) equipment with greater capability at lower cost will probably help improve military capabilities at lower cost.

The infrared analysis program will provide improved image analysis capabilities for unusual conditions; it may be leveraged for analyzing other types of image data from varying imaging conditions, and used as a tool in other projects.

7 Key Personnel.

PRINCIPAL INVESTIGATOR: Mike Polehn

Oregon State University, BS in Computer and Electrical Engineering, 1983

RELEVANT EXPERIENCE

R&D experience with airborne infrared optical sensor tracking systems for potentially harsh environments: sensor evaluations, signal processing algorithm development, extensive simulations and processing performance evaluations, and design and delivery of high-performance pipeline signal processors to internal and DoD facilities. Extensive test and evaluation of infrared sensor cameras, performance and data analysis, test program improvement, extensive analysis computer programs, operational and test issue resolution, algorithm development, and operational performance improvement.

RESUME

Senior computer development engineer with 30 years of experience in Digital Signal Processor (DSP) development, device driver development, infrared sensor digital signal processing and sensor analysis, and embedded development. Both SW (primarily C/C++) and HW development experience across the full development cycle of definition, documentation, design, development, debug, and test. Combining HW, SW, and analytical experience provides superior computer engineering capability over any of these skills by itself. Flexible, self-directed, and an independent thinker, capable of doing very complex work with little or no supervision.

Intel: Wrote programs that utilized Xeon CPUs to do high-speed real-time network packet processing on the host or in VMs, extensive network performance characterization, Telco communications, PBX interfaces, audio subsystems, video conferencing, embedded controllers, device drivers, BIOS work, and Linux, NetBSD, and Windows software and driver development. Made detailed measurements of CPU clock usage by section of the Linux communications protocol stack, with detailed block diagrams, for 3 different Xeon CPU generations, and ran experiments to improve protocol stack performance, including calling into the device driver from the socket queue code when the socket RX queue is empty to push any new network device packets through the protocol stack to the socket RX queue.

Northrop Grumman, L3 Communications, Sensors Unlimited, Boeing: Airborne infrared sensor R&D, sensor data analysis, algorithm development, signal processor HW development, Windows-based test and analysis programs, night vision goggle and scope test programs, data and performance studies, infrared camera tuning, issue resolution, and operational performance improvements, and FPA sensor chip test, analysis, and documentation data improvements.

Diamond Multimedia, RadiSys, Oresis Communications, Racal Data Communications, Acers Gaming, RedcellX, Columbia Sportswear, Advanced Technology Labs: Spec communications modems, telecom switching equipment, modem data pump code, BIOS code, embedded firmware, hardware design, device drivers, ultrasound equipment.

Hardware and Software Development Skills Summary

CPUs and uPCs Used: Xeon, ADSP-BF531 BlackFin, MCF5280 ColdFire, MCF5272 ColdFire, MCP859T embedded PowerPC, x86 (88, 386 to P4), National 486 embedded CPU, 32 bit ARM, PowerPC, CR32 RISC, TMS320C30, TMS320C51, 68000, 68010, 68020, Z8000, Z80.

Hardware Device Types: DSPs, PCs, NOR FLASH, NAND FLASH, SRAM, SDRAM, FIFOs, TTL, CMOS, ECL, GAL, PAL, PLD, FPGA, A/D, D/A, Audio Devices, VLSI Devices, Boost and Buck regulators, and Analog Devices.

HW Development Tools: Mentor schematic tools, Orcad schematic and board layout tools, state machine and programmable logic tools, timing analysis, and SPICE. DSP & PC Emulators, Logic Analyzers, Oscilloscopes, and HW prototyping tools.

Languages: C, C++, Fortran, Pascal, Basic, Microsoft Visual Basic 6.0, Cobol, Assembly, HTML, PERL & BASH Shell Scripts.

Assembly: X86 32 & 16 bit Protected, Real, and mixed. GAS, MASM, TMS320C51, TMS320C30, 68000, various embedded controllers.

Telecom Device Drivers: T1, E1, ISDN, B, D, 2B+D, aLaw, uLaw, HDLC, V120, V110, G728, interface statistics, and PBX D channel call management.

Computer and Embedded Device Drivers: Ethernet, Sound, Serial, Parallel, PCMCIA, PIC, PnP Enabler, DMA, Console, EIDE ATAPI CD-ROM, USB, RTC, Timer, PC BIOS Code, I2C, Fan & Sensor, various special interfaces.

Operating System Side Code: System Services Use, Systems Service Modifications, Device User Interface, Software ISRs, Hardware ISRs, Sound Mixer, Sound File Decoder, CD ROM File System, Operating System Loader, Program Relocatable Loader, System Memory and Page Descriptor Management.

General Applications: Windows programs; Windows memory use sort, classifier, and analyzer. TCP/IP Servers, Clients, Winsock, UNIX Socket, and TCP/IP-specific communications protocol monitor and log. Real-world signal data analysis software. Data simulators and algorithm performance analyzer programs.

Embedded Real Time Operating Systems: VxWorks & Tornado, PSOS+, BSD 4.4 Unix as RTOS, LINUX as RTOS, SPOX, QNX, created primitive RTOS, and embedded and loadable programs with no commercial OS or RTOS present.

General Operating Systems: DOS, Win 3.1, Win 95, NT, ME, 2000, XP, NetBSD (4.4 BSD UNIX derivatives), LINUX, and VAX.

Graphical User Interface (GUI): Windows SDK and GUI, Graphics Libraries, Tektronix Terminal GUI.

Program Development Tools: Windows SDK and Windows DDK, Microsoft Visual Studio Visual C++, C, Microsoft Visual Basic 6.0, Assemblers, various Editors and Debuggers. Includes Microsoft 16 and 32 bit compilers of C, C++, and MSVC 1.0 through 6.0. Borland 16 and 32 bit. Source code management with CVS, MKS Source Integrity, PVCS, Source Safe, and Git. Hardware debugging tools included ICE and uPC Emulators, Logic Analyzers, and Oscilloscopes.

Cross Platform: Windows X-Window Servers, Windows Cygwin Bash shell environment and tools, cross compilers.

UNIX, LINUX Development: GCC, GAS, GDB with various user interfaces, VI, CVS, BASH, KORN, Make, PERL & Shell Scripts. LINUX 2.4 and 2.2.10, NetBSD 1.5, BSD 4.4 kernels. Have written user-side programs, device drivers, and loadable OS modules, and have made operating system code modifications. Modules and compile-time device configuration. UNIX SW development tools; created and submitted open source kernel and project patches.

UNIX, LINUX System Administration: SuSE (preferred), Redhat. Kernel configurations and builds. Run level creation, modification, and management. System HW, disk, disk partition organization, assembly, and OS installation. SAMBA, Apache, CVS server, NFS client and server management. System networking, HW configuration, IP address administration, Internet, intranet, and firewall DMZ, Internet security, IP chains/rules. User and group management. Note: Primarily a development engineer; system, network, and code administration experience comes from the administration needs of various development projects.

Note: Abbreviated resume. Full resume available on request.

8 Foreign Citizens.

No foreign citizens or individuals holding dual citizenship will be working on, or have access to, the Phase I or Phase I Option projects as direct employees, contractors, or consultants.

9 Facilities/Equipment.

The physical facilities needed to carry out Phase I are just office space, since this is programming, documentation, and image data analysis work. Currently available are 5 Windows- and Linux-based PCs for development. One or two newer, faster PCs are being considered as upgrades, and perhaps a Xeon-based test system; however, these are general-purpose systems that can be used for other work, so they will be a Lightning Fast Data Technology expense.

The facilities meet all environmental laws and regulations of federal, Washington State, and local governments including, but not limited to, the following groupings: airborne emissions, waterborne effluents, external radiation levels, outdoor noise, solid and bulk waste disposal practices, and handling and storage of toxic and hazardous materials.

10 Subcontractors/Consultants.

No subcontractors or consultants are required for Phase I and the Phase I Option.

11 Prior, Current or Pending Support of Similar Proposals or Awards.

No prior, current, or pending support for proposed work.

12 Discretionary Technical Assistance.

No Discretionary Technical Assistance (DTA) is required for Phase I and the Phase I Option.

Post Proposal Comments

A debriefing was requested, but none was provided.

The submitted proposal had multiple 3.1.1, 3.1.2, and 3.2.1 sections; wonderful Microsoft products show us the enhanced features of automatic section numbering systems.