Automated Reconfiguration of Mission Assets

From Airforce 17.2 SBIR Solicitation

OSD172-DI3 TITLE: Automated Reconfiguration of Mission Assets

TECHNOLOGY AREA(S): Information Systems

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer, Ms. Gail Nyikon,

OBJECTIVE: Provide a capability that can rapidly and automatically reconfigure protected IT assets (e.g., multi-tier servers) in response to an ongoing cyber-attack.

DESCRIPTION: High value IT assets, such as endpoints, servers, and devices, are under continual attack by well-resourced adversaries. This research will focus on one potential reaction to an ongoing cyber-attack and provide the ability to reconfigure a protected, complex, multi-tiered application. Dynamic reconfiguration can consist of a wide range of actions, including providing new network addresses for the assets, reconfiguring protection assets such as firewalls, changing the protocols between components in a multi-tier solution, and changing the cloud infrastructure provider of the mission, even when the underlying infrastructure ecosystem differs across cloud service providers (CSPs). The focus of this research is on reconfiguration in the infrastructure of the application. The goal is to use multiple methods to make the protected asset’s attack surface rapidly unrecognizable, possibly moving the application, and forcing the attacker to go back to square one in planning the attack. High degrees of automation are preferred in the solution, minimizing the administrative burden of the capability. Being able to define the logical components and relationships of an N-tier application’s deployment would be a valuable feature in this research.

No environmental factors required.

PHASE I: This phase will focus on the feasibility and planning for this research effort. The investigators will create an analysis document, in the form of a Technical Report, regarding the feasible options for changing the attack surface of an operational application “on the fly”. Several possible options for automated reconfiguration are mentioned in the description above (e.g., new addresses, component reconfiguration, cloud porting, etc.). The investigators will determine use cases to define when dynamic reconfiguration of an N-tier application should occur and what triggers will be used from the larger threat-sensing capabilities to invoke the features developed in this research. (This research does not sense when an attack is occurring; rather, it provides one reaction to a detected attack.) This research should document the best practices in application development that will facilitate rapid reconfiguration functions. The investigators will describe ideal application/workload design options that best match this dynamic reconfiguration environment. Important to this report is the identification of specific control points within the protected application and its hosting environment components, including compute, storage, and networking.

PHASE II: This phase will focus on the design and implementation of a software-based solution to provide extensive reconfiguration of a mission application’s run-time configuration in order to protect the application from any further progression in an attack. The intent is to reconfigure an application’s attack surface, location, and communication methods with only minimal impact to the operation of the asset. The asset may continue to run in a new configuration, which may reside in a different DoD-approved cloud, with new network addresses, with changed security components, and may communicate internally with different protocols.

One important aspect of reconfiguring an N-tier instantiation is being able to define the logical pattern of the application in terms of application components, such as the constituent servers, communications paths, ports, protocols, and security components. Once these pattern requirements are known, logically equivalent substitutions can be made for the purpose of changing the attack surface. For example, if Server 1 communicates with Server 2 using protocol A, then the same effective communication can occur through a different communication path, between different addresses, possibly in a different cloud IaaS.
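A minimal sketch of this idea is shown below, assuming a hypothetical data model (the field names, candidate lists, and `reconfigure` helper are illustrative, not part of the solicitation): the logical pattern of an N-tier application is separated from its physical realization, so a logically equivalent substitution can be picked to change the attack surface.

```python
# Hypothetical sketch: model an N-tier application's logical pattern so that
# logically equivalent substitutions (new addresses, ports, clouds) can be
# generated to change the attack surface. All names are illustrative.
from dataclasses import dataclass
import random

@dataclass(frozen=True)
class Link:
    src: str          # logical component name, e.g. "Server1"
    dst: str
    protocol: str     # logical protocol requirement, e.g. "A"

# Logical pattern: what must remain true regardless of deployment.
pattern = [Link("Server1", "Server2", "A")]

# Candidate physical realizations for each logical link (different clouds,
# addresses, ports) that satisfy the same logical requirement.
candidates = {
    ("Server1", "Server2", "A"): [
        {"cloud": "CSP-1", "src_ip": "10.0.1.5", "dst_ip": "10.0.2.7", "port": 8443},
        {"cloud": "CSP-2", "src_ip": "172.16.9.2", "dst_ip": "172.16.4.3", "port": 9090},
    ],
}

def reconfigure(pattern, candidates, rng=random):
    """Pick a fresh, logically equivalent realization for every link."""
    return {
        (l.src, l.dst, l.protocol): rng.choice(candidates[(l.src, l.dst, l.protocol)])
        for l in pattern
    }

deployment = reconfigure(pattern, candidates)
```

Each call to `reconfigure` yields a deployment that satisfies the same logical pattern while presenting a potentially different attack surface.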

PHASE III DUAL USE APPLICATIONS: This phase will focus on the commercialization of the technology. The solution developed in Phase II will be “productized” for more general use across the government and in the commercial marketplace. Consumer documentation, such as Administration and User’s Guides for the product, will be developed. The investigators will determine the cloud IaaS offerings and guest operating systems that the product will support, based on market need, and move to expand the portability of the product to those environments.

REFERENCES: 1. Cloud Migration Research: A Systematic Review,

2. “…DoD must increase its defensive capabilities to defend DoD networks and defend the nation from sophisticated cyberattacks…” “The DoD research and development community as well as established and emerging private sector partners can provide DoD and the nation with a significant advantage in developing leap-ahead technologies to defend U.S. interests in cyberspace. In addition to supporting current and planned investments, DoD will focus its basic and applied research agenda on developing cyber capabilities to expand the capacity of the CMF and the broader DoD cyber workforce.”

3. Donovan, Paula J., et al. "Quantitative Evaluation of Moving Target Technology." Technologies for Homeland Security (HST), 2015 IEEE International Symposium on. IEEE, 2015.

4. Carvalho, Marco, et al. "Command and control requirements for moving-target defense." IEEE Intelligent Systems 27.3 (2012): 79-85.

KEYWORDS: Cyber Defense, cloud migration, application reconfiguration, automation

TPOC-1: David Climek

Phone: 315-330-4123


Submitted Proposal: OSD172-D13-0123

Note: Formatting changed to suit web page layout; page title blocks and disclosure restriction blocks have been removed from the text.

1 Identification and Significance of the Problem or Opportunity.

Due to the lack of high-quality attack detection and attack suppression methods on the United States Internet, cyber criminals, opposing government organizations, and uninformed hobby hackers are running roughshod over many users, businesses, and government entities on the internet. They secure access to compromised systems and use these to amplify their ability to attack and infiltrate other computers on the internet. An attack that secures access to a computer in a data center allows a high-speed, high-power attack on all other systems in the data center, using a moderate portion of the available data center bandwidth with very little internet bandwidth. Potential solutions can be created; however, the large number and variety of invalid solutions being financed, too little interest, not engaging the right expertise, and a lack of resources are preventing the creation and deployment of valid solutions. Some say that insanity is doing the same thing over and over and expecting a different outcome. This project is to look into taking a different approach than the standard computer endpoint, network “deploy and pray” methods.

New methods to counter cyber attacks on high-value IT assets, IP servers, network components, and devices need to be developed and deployed to minimize the effects of attacks, making cyber attacks substantially less effective, much more costly, or almost impossible to engage in effectively. One method of increasing the difficulty of attack is being able to quickly change the internet-facing IP address and port locations, which changes the attack surface. Multiple methods can be applied, where each method adds a multiplier to the cost of an effective attack, making attacks much more costly and ineffective. Techniques such as indirect login for service access, obscure dynamic IP (Internet Protocol) Server location service techniques, contrary network communications rules with arbitrary communications-rule-variation packet processing, an alternative Linux protocol stack that works contrary to expected operation, and non-industry-standard alternative program and VM (Virtual Machine) execution management subsystems give a powerful toolset for obscuring location, obscuring network communications architecture, suppressing attack communications, and automating rapid changes to an arbitrary access and attack surface.

2 Phase I Technical Objectives.

The Phase I technical objectives are:

  • Research, document, and plan for technology that can support a method of server services relocation and reconfiguration. The research includes 1): IP Server Location Services, 2): High Speed Realtime Communications Access Control, 3): Communications Access Databases, 4): Internet Constellation Program Manager, 5): Program Execution Manager, 6): Alternative Server Protocol Stacks, and 7): Application Writing Techniques that minimize poor communication methods and make use of automated internet access methods.
  • Create some simple prototypes to demonstrate local Ethernet communications using high throughput realtime packet processing with communications rules different than the standard local area network rules, demonstrate obscure IP Service location data tables, and demonstrate indirect user/system login.
  • Make use of the Lightning Fast Database Server product to provide database table use, database table distribution, and ongoing updates, supplying data to the control systems for management, control, and communications flow decisions. This demonstrates a paradigm shift to database-driven control by data content, where the data records can be changed very quickly and can be used as a foundation for protection, communications management, and system management decisions for operations both local and spanning the internet. The Lightning Fast Database Server product already exists and is not included in this project, but will be used for effective automated reconfiguration services.

3 Phase I Statement of Work

The work for Phase I is 9 Months in length and will be conducted at Lightning Fast Data Technology Inc. Headquarters (3419 NE 166th Ave, Vancouver, WA 98682).

3.1 Work Plan Outline

3.1.1 Project Scope

The project includes 2 presentations, research and documentation, prototypes, and reports. The presentations are for the Kickoff and Mid-Project meetings. Research subjects include 1): IP Server Location Services, 2): High Speed Realtime Communications Access Control, 3): Communications Access Databases, 4): Internet Constellation Program Manager, 5): Program Execution Manager, 6): Alternative Server Protocol Stacks, and 7): Application Writing Techniques. Prototypes include realtime packet processing with flow-based rules, use of IP server location database tables, and an indirect-login Windows app with login server support. Reports include monthly reports and a Final Report, which will include the final versions of all of the research subjects listed.

3.1.2 Task Outline

  1. Preparation of Kickoff PowerPoint Presentation
  2. Presentation of Kickoff PowerPoint at Sponsoring OSD Facility
  3. Apply for Secret Clearance for Phase II
  4. Arbitrary IP Server Location Service Study and Document
  5. Arbitrary IP Server Location Service Prototype
  6. Data-Center/Cloud Communications Access Control Study and Document
  7. Data-Center/Cloud Communications Access Control Prototype
  8. Current Access Database Study and Document
  9. Current Access Login Prototype
  10. Internet Constellation Program Manager Study and Document
  11. Preparation of Mid-Project PowerPoint Presentation
  12. Presentation of Mid-Project PowerPoint at Sponsoring OSD Facility
  13. PEM (Program Execution Manager) Study and Document
  14. Server Protocol Stack Change Study and Document
  15. Application Writing Techniques Study and Document
  16. Apply for ITAR Registration
  17. Project Management (time spread across project)
  18. Phase I Monthly Status and Progress Reports
  19. SF-298 Form
  20. Final Report

3.1.3 Milestone Schedule (may be adjusted to accommodate the Christmas-to-New-Year’s holiday)

  • Week 3: Kickoff PowerPoint Document
  • Week 4: Presentation of Kickoff PowerPoint Document at OSD facility.
  • Week 4: Apply for Secret Clearance
  • Week 6: Month 1 Progress Report
  • Week 6: Preliminary (2/3) of Arbitrary IP Server Location Service Study and Document
  • Week 7: Arbitrary IP Server Location Service Prototype
  • Week 11: Month 2 Progress Report
  • Week 12: Preliminary (2/3) of Data-Center/Cloud Com Access Control Study and Document.
  • Week 14: Data-Center/Cloud Communications Access Control Prototype
  • Week 15: Month 3 Progress Report
  • Week 18: Preliminary (2/3) of Current Access Database Study and Document.
  • Week 19: Month 4 Progress Report
  • Week 21: Current Access Login Prototype
  • Week 23: Preliminary (2/3) of Constellation Program Manager Study and Document.
  • Week 24: Mid-Project PowerPoint Presentation
  • Week 24: Presentation of Mid-Project PowerPoint at Sponsoring OSD Facility
  • Week 24: Month 5 Progress Report
  • Week 27: Preliminary (2/3) of PEM (Program Execution Manager) Study and Document.
  • Week 28: Month 6 Progress Report
  • Week 30: Preliminary (2/3) of Server Protocol Stack Change Study and Document.
  • Week 31: Preliminary (2/3) of Application Writing Techniques Study and Document.
  • Week 32: Finish Arbitrary IP Server Location Service Document and Integrate into Final Report.
  • Week 32: Month 7 Progress Report
  • Week 33: Finish Data-Center/Cloud Com Access Control Document and Integrate into Final Report.
  • Week 34: Finish Current Access Database Document and Integrate into Final Report.
  • Week 35: Finish Constellation Program Manager Document and Integrate into Final Report.
  • Week 36: Finish PEM (Program Execution Manager) Document and Integrate into Final Report.
  • Week 37: Finish Server Protocol Stack Change Document and Integrate into Final Report.
  • Week 37: Month 8 Progress Report
  • Week 38: Finish Application Writing Techniques Document and Integrate into Final Report.
  • Week 38: Apply for ITAR Registration.
  • Week 39: SF-298 Form.
  • Week 39: Final Report, Project Completed.

3.1.4 Deliverables

  • Week 4: Kickoff PowerPoint Document
  • Week 6: Month 1 Progress Report
  • Week 6: Preliminary (2/3) of Arbitrary IP Server Location Service Study and Document
  • Week 11: Month 2 Progress Report
  • Week 12: Preliminary (2/3) of Data-Center/Cloud Com Access Control Study and Document.
  • Week 15: Month 3 Progress Report
  • Week 18: Preliminary (2/3) of Current Access Database Study and Document.
  • Week 19: Month 4 Progress Report
  • Week 23: Preliminary (2/3) of Constellation Program Manager Study and Document.
  • Week 24: Mid-Project PowerPoint Presentation
  • Week 24: Month 5 Progress Report
  • Week 27: Preliminary (2/3) of PEM (Program Execution Manager) Study and Document.
  • Week 28: Month 6 Progress Report
  • Week 30: Preliminary (2/3) of Server Protocol Stack Change Study and Document.
  • Week 31: Preliminary (2/3) of Application Writing Techniques Study and Document.
  • Week 32: Month 7 Progress Report
  • Week 37: Month 8 Progress Report
  • Week 38: Git repositories for all software written specifically for the project. However, these will not include the Database and other general library support code; in essence, this will be sample code.
  • Week 39: SF-298 Form.
  • Week 39: Final Report, Project Completed.

3.1.5 Kickoff Meeting

  • Week 4: Presentation of Kickoff PowerPoint Document at sponsoring OSD facility.

3.1.6 Progress Reports

  • Week 6: Month 1 Progress Report
  • Week 11: Month 2 Progress Report
  • Week 15: Month 3 Progress Report
  • Week 19: Month 4 Progress Report
  • Week 24: Month 5 Progress Report
  • Week 28: Month 6 Progress Report
  • Week 32: Month 7 Progress Report
  • Week 37: Month 8 Progress Report

3.1.7 Mid-Project Technical Review Meeting (within 6 months)

  • Week 24: Presentation of Mid-Project PowerPoint at sponsoring OSD facility

3.1.8 Final Report with SF-298

  • Week 39: SF-298 Form.
  • Week 39: Final Report, Project Completed.

3.2 Phase I Tasks

3.2.1 Task 1: Preparation of Kickoff PowerPoint Presentation

Create a PowerPoint presentation of the project plans to present at the Sponsoring OSD Facility.

3.2.2 Task 2: Presentation of Kickoff PowerPoint at Sponsoring OSD Facility

Travel to the sponsoring OSD facility within the first month, meet, present the Kickoff PowerPoint document, and engage in project-related discussions.

3.2.3 Task 3: Apply for Secret Clearances for Phase II

The SBIR contract profit structure is not conducive to handling the high-cost ITAR and cleared-personnel cost structures desired for DoD support. This means that these costs need to be built into the cost structure of the proposal. Since a clearance can be a long-lead item, it is applied for early in the project. Note: the Principal Investigator, Mike Polehn, held a secret clearance when previously working for Boeing Aerospace, which should result in a lower cost than having no previous clearance. This task is to apply for a Secret Clearance for Mike Polehn per the Contracting Officer's request. If the Contracting Officer does not authorize and make the request, no application will be made.

3.2.4 Task 4: Arbitrary IP Server Location Service Study and Document

Note: Applications are generally defined as endpoint programs used by a user; however, all network-accessible programs on the internet are IP (Internet Protocol) servers, so they will be referred to as IP Servers. A physical server can often support multiple IP Servers.


Figure 1: Communication Established Through Location Servers

Most systems on the internet are located by using the DNS system, which was created using an arbitrarily chosen protocol to relate a text name string to an IP address. This is really just a public database system, which tells other systems the internet address of a computer given an arbitrary name. However, a DNS system is not required to find the IP address of another system, which means that other methods of reporting the location of another system are possible; it does help to have an underlying, powerful database system.

Service reliability can be achieved with highly reliable, redundant hardware at each location; however, high reliability can also be achieved by running multiple servers at multiple physical internet locations. Three service locations are represented, since two can easily agree that the third location is unresponsive and kick off a new copy at a new cloud IP and port location. The newly created service starts, registers with the location database server, gets the current database blocks of the associated database, and contacts the other servers for coordination. If there are no other associated server locations currently in the database, it knows to start other copies at arbitrary internet locations, which can use arbitrary ports. If a system is being attacked, the server can stop responding and/or disconnect IP connections; its users can reconnect to another system found through the location database records, and the server set can start another copy and deregister the failed or shut-down server. The users of the service get a copy of the table and randomly pick a system to access. This type of operation results in dynamic IP address and port configuration, which can be used to change the attack surface in a largely automatic way.
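The client side of this behavior can be sketched as follows, under stated assumptions: the `location_table` contents, the `alive` flag standing in for a real TCP connect attempt, and the function names are all illustrative, not part of the Lightning Fast Database Server's actual interface.

```python
# Hypothetical sketch of the client behavior described above: a user holds a
# copy of the location table, picks a registered server at random, and falls
# back to another entry if the chosen one is unresponsive.
import random

# Local copy of the location table: three redundant service locations at
# arbitrary IP addresses and ports, kept updated from the location database.
location_table = [
    {"ip": "203.0.113.10", "port": 47111, "alive": True},
    {"ip": "198.51.100.22", "port": 9242, "alive": False},  # unresponsive
    {"ip": "192.0.2.33", "port": 60510, "alive": True},
]

def try_connect(entry):
    # Stand-in for a real TCP connect attempt to (entry["ip"], entry["port"]).
    return entry["alive"]

def connect_to_service(table, rng=random):
    """Randomly pick a location; on failure, retry the remaining entries."""
    remaining = list(table)
    rng.shuffle(remaining)
    for entry in remaining:
        if try_connect(entry):
            return entry
    raise ConnectionError("no registered service location responded")

entry = connect_to_service(location_table)
```

Because clients pick randomly and retry, a server that stops responding under attack is routed around without any central coordination.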

A Lightning Fast Database Server, a binary-element database product, has been created that can run on an arbitrary number of servers, serve an arbitrarily large number of users, host an arbitrary number of database tables, and create and destroy databases as needed; a database user can keep local copies and get updates periodically and/or as needed. This database server is substantially different from a traditional database system, so assumptions about how it works should be avoided. The database end-user system can be integrated into server applications, local program managers, and communications management systems as needed.

On the surface, it appears that the location database server could be a weak point for attack. However, an arbitrary number of database servers can be run and placed at arbitrary IP and port locations (so their attack surfaces can be changed); they can host an arbitrary number of databases and IDs (a user can only get access and updates if it knows a database's arbitrary name); they can serve databases only to users/endpoints on the access lists; data can be encrypted and decrypted only at the usage endpoints (encryption keys can be endpoint secrets); access is controlled by known endpoint and registered server information; honey-pot databases can be quickly created; etc. There are a large number of methods that help with security. The Lightning Fast Database Server does not currently support all of these features, but features can be added over time as the need arises.

However, the Lightning Fast Database Server was designed for a different type of use, and some work may be needed to optimize it for this task and possibly add new features that help support dynamic attack surface configuration and security services; this will be a Phase II project.

This task is to research and analyze how a database system can be used to effectively coordinate the current communications configuration and its changes for dynamic IP server access surface management. A list of desirable features and undesirable/weak features will be created, analyzed, and ordered by importance for support of effective, reliable, and difficult-to-attack location database services. The task finishes by analyzing the changes, plans, and Phase II work needed to add these features to effectively use the Lightning Fast Database Server.

3.2.5 Task 5: Arbitrary IP Server Location Service Prototype

This task is to create a simple IP Server program that will interact with the database service, where it will get an arbitrary name/id location table (the base database table must already have been created on the server, with its associated metadata file that defines the record format and content) from a database server located at an arbitrary IP address and server port as set by the application's configuration data. This first level is to find the Database IP Server(s) that handle the target location tables, by table name and/or id. The service database table might be handled by the same Database IP Server or a different one.

Using the database records of this table, the application will look up the database server location(s) for the target table name/id in order to register as a service. It contacts the database server at that IP address and port and becomes a user of the table. It then creates a service availability record by adding a database record with the required information to the table; through the update process, the record is sent to the database server, which puts the available service name/id record in the associated table. During database updates for that table, the server distributes the new record to the table's users during their update cycles. Programs needing to access the services can get database copies of the location table(s) and use them to find the location of the desired IP Server service(s). Another simple prototype program will get the table, use it to find the IP Server, and make a TCP/IP connection with the registered IP Server.

These are simple prototypes, with no table access-list processing, no encryption, a single database server, etc., just to establish the use of the database service to register and find the IP Servers of target services.
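The register-and-discover flow of the two prototype programs can be sketched as below; an in-memory dict stands in for the Lightning Fast Database Server (whose real interface is not public), and the function names and record fields are illustrative assumptions.

```python
# Minimal sketch of the prototype flow described above. A dict stands in for
# the location table held by the database server; names are illustrative.

location_table = {}   # "location table" keyed by service name/id

def register_service(table, service_id, ip, port):
    """IP Server adds its availability record; the database update process
    would then distribute the new record to all users of the table."""
    table.setdefault(service_id, []).append({"ip": ip, "port": port})

def find_service(table, service_id):
    """A client with a copy of the table looks up where to connect."""
    records = table.get(service_id, [])
    if not records:
        raise LookupError(f"no registered location for {service_id!r}")
    return records[0]  # a real client might pick randomly among records

register_service(location_table, "svc-demo", "192.0.2.33", 60510)
loc = find_service(location_table, "svc-demo")
# The second prototype program would now open a TCP connection to
# (loc["ip"], loc["port"]).
```

Because the service's IP and port are data in a table rather than fixed configuration, re-registering at a new address changes the access surface without touching the clients.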

3.2.6 Task 6: Data-Center/Cloud Communications Access Control Study and Document

The communications between systems are governed by rule sets that allow data to be moved as expected between systems. These rules must be obeyed to move data across the internet to the particular endpoint customers.

However, end customers, whether individuals, businesses, governments, communications service providers, cloud providers, or data centers, tend to use equipment with predefined communications methods and protocol rules, which is very convenient. These methods convert the IP address, IP port, and protocol to an address (usually Ethernet) that has nothing to do with IP addressing at the physical network level.

Cables are connected, the boxes the cables plug into are combined, systems are configured, and you have a local network. Unfortunately, these plug-and-play configuration rules and methodology are also highly usable by attackers and are extensively exploited. The way data communications are handled within a network does not need to follow the conventional communications rules. Telecommunications equipment, which has to handle the communications of many endpoint IP addresses and customers through single communications paths (excluding the redundant paths required for high reliability), does not use the standard rules of endpoint networks, since it handles communications with different IP addresses, ports, and protocols however needed to complete the communication paths.

By the same methods, any IP address tuple (IP address, port ID, protocol) owned by the endpoint can be arbitrarily assigned to access any system in the endpoint network. This is the method used by load balancers to have multiple systems support a single IP address and server port location (such as a high-use web server). Addresses can be translated, and the flow of packets can be dynamically defined and routed by other rules, such as communications-flow rules where a communication's input and output tuple sets are established by a complex operation and packets not listed in established flows are discarded. In addition, if the IP address of incoming communications is not listed in an access database, those packets can be quickly discarded; this also identifies IP addresses that have no business trying to access computers at this address, and thus gives IP addresses that should be monitored for improper and/or compromised internet access. Also, each allowable access, as an access tuple, can be different for each particular user/source IP as loaded in the access database (as obtained from the location servers), in which case invalid access combinations allow collection of more attack data, which can be used to help find and resolve compromised high-value users' end systems and/or detect man-in-the-middle attacks.
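The flow-and-access-database rule described above can be sketched as follows; the table contents, tuple shapes, and function name are illustrative assumptions, and the complex flow-setup operation is elided to a single line.

```python
# Hedged sketch of flow-based access control: packets are matched against
# established flows; a packet with no flow is checked against the access
# database, and sources with no business here are dropped and logged.

established_flows = set()                     # {(src_ip, dst_ip, dst_port, proto)}
access_db = {"198.51.100.7": {("203.0.113.10", 8443, "tcp")}}
suspect_log = []                              # source IPs probing without access

def handle_packet(src_ip, dst_ip, dst_port, proto):
    flow = (src_ip, dst_ip, dst_port, proto)
    if flow in established_flows:
        return "forward"                      # fast path: flow already set up
    allowed = access_db.get(src_ip, set())
    if (dst_ip, dst_port, proto) in allowed:
        established_flows.add(flow)           # complex flow setup elided here
        return "forward"
    suspect_log.append(src_ip)                # candidate for attack monitoring
    return "drop"

assert handle_packet("198.51.100.7", "203.0.113.10", 8443, "tcp") == "forward"
assert handle_packet("233.252.0.9", "203.0.113.10", 8443, "tcp") == "drop"
```

Note that the drop path doubles as a sensor: every rejected source lands in `suspect_log`, giving exactly the monitoring data the paragraph describes.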


Figure 2: Cloud Architecture that Supports Arbitrary IP Addressing

Shown in Figure 2: Cloud Architecture that Supports Arbitrary IP Addressing is a possible arrangement that would be more secure than ordinary cloud system arrangements; other equally valid arrangements are possible. In this arrangement, internet access for the general computers goes through a realtime packet processing program on a single-socket IA system (dual-socket systems make correct NUMA node setup difficult, and CPU-to-CPU communications thrashing can severely degrade realtime performance) with 12 to 16 10 Gb Ethernet interfaces. Eight of the 10 GbE interfaces go directly to an interface on each of the general-purpose servers. The realtime code would generally use one CPU core per 10 GbE interface, in this case 10 total (8 to the servers, 1 to the internet, and 1 to the high-speed data center connection), plus 2 to 4 cores for general system and control operations, giving a 12-to-14-core system CPU. The program manager would manage the servers through a control bus, but could have separate interfaces for each system to prevent cross-server attacks. It would monitor, control, run programs, start VMs, and detect attacks on the control bus originating from any server on the bus. The figure shows redundant packet processing and program management systems for higher local reliability, but the doubled packet processing and control is optional and serves to prevent all servers in the set from going down together due to the failure of a single path component. None of the general servers' customer-data 10 GbE interfaces go through standard switches, which prevents standard network attack capability. However, this cannot prevent VM-to-VM and VM-to-host attacks residing on the server, since these use a well-defined network structure in the host system, as well as any SR-IOV interfaces of the 10 GbE interface.

The attack surface of a computer system is primarily through its operating system. A VM seems to be protected, since it cannot directly see all host memory or the memory of other VMs. However, a host system can see all memory, including VM memory, and some interfaces can be monitored. While trying to figure out how to set up an OpenStack network outside the default setup, after logging on to a system under OpenStack control as a standard root user, I realized that programs like OpenStack are just a specialized operating environment layered on top of the conventional operating system (usually Linux); they use only a subset of commands, and any changes not exactly following OpenStack conventions simply cause the system not to operate per OpenStack requirements. Any standard operating system command can override any OpenStack setup. Programs like OpenStack just give the overall system a new set of cyber attack surfaces, in addition to the attack surfaces of each operating system being used, with multiplied difficulty of detecting an obscure host-system manipulation. Even though a VM cannot see into a host, in some cases a VM can run intensive attacks on the host, and these may not be obvious beyond consuming a lot of compute resources.

The NFV (Network Function Virtualization) systems that telcos are in the process of attempting to adopt also suffer from the same extra control layers, which add extra attack surfaces on top of the individual server attack surfaces. The NFV functionality of running realtime packet processing can be achieved using conventional Linux systems. However, the goal of automated-management NFV systems, with the requirement of compatibility with a particular automated management system, really makes the system a vertical integration, which is opposed to the original goal of horizontal, open-source-style compatibility initially sought during the NFV definition phase.

The problem with these types of super-layered systems is that, given access to the operating environment, very thorough attacks on and analysis of a program's or VM's contents and operation can be done, probably undetectably from the VM user's perspective. However, for realtime packet processing, which is one type of NFV, dropping packets at 10 MPPS (Million Packets Per Second) when the system is supposed to handle up to 20 MPPS might raise a red flag to check the system closely and determine the cause of the performance degradation.

Back to Figure 2: Cloud Architecture that Supports Arbitrary IP Addressing. The system service providers are in the business of providing support systems and would provide a unique configuration, rather than a low-budget cookie-cutter OpenStack system, if the money was right.

The internet input/output real-time packet processing program is a high-speed packet processing and virtual switching program based on DPDK libraries. Some of my DPDK work with virtual switching sustained bidirectional zero-loss switching of greater than 22 MPPS (Million Packets Per Second) at 64-byte packets. At 128-byte packets, less than 16 MPPS is required for 100% of the 10 GbE bidirectional rate. Packet switching is based on processing flows, and no packets are passed without an established flow. When a packet comes in without an associated flow, a decision must be made whether to set up the flow in the flow table. This is where the database model becomes very powerful. If an established IP address of a user with current credentials (their machine contacted a system that loaded the IP address's access rights into a table, of which a copy is kept updated in the realtime program) requests a resource (IP address and port), and the resource's service is available locally (the local resource database table has an entry for the service), then a flow connection can be established. The database entry lists the service with its advertised address and port, and the incoming flow is translated to the actual address and port of the service, which can differ from the target IP address and port of the incoming flow. The virtual switch establishes the connection and address translation and routes the packets to the associated server; since this is probably a TCP connection, it follows the TCP port translation and establishes bidirectional translation of the connection to the final TCP/IP port on the IP server.
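The flow-decision logic above can be sketched as follows. This is an illustrative Python model for clarity only; the production path would be a C/DPDK program, and all table names, addresses, and entries here are hypothetical:

```python
# Illustrative sketch of the flow-establishment decision (not DPDK code).
# Plain dicts stand in for the locally replicated, hash-indexed database
# tables described in the text; all names and entries are hypothetical.

# Access-rights table: source IP -> set of (service IP, service port) it may use.
access_rights = {
    "198.51.100.7": {("203.0.113.10", 8443)},
}

# Local service table: advertised (IP, port) -> actual (IP, port) of the server.
service_table = {
    ("203.0.113.10", 8443): ("10.0.0.21", 443),
}

# Established flows: flow key -> previously made decision.
flow_table = {}

def decide_flow(src_ip, dst_ip, dst_port):
    """Return ('forward', real_ip, real_port) and install a flow,
    or ('drop', None, None) and install a drop flow."""
    key = (src_ip, dst_ip, dst_port)
    if key in flow_table:                      # fast path: flow already decided
        return flow_table[key]
    allowed = (dst_ip, dst_port) in access_rights.get(src_ip, ())
    local = service_table.get((dst_ip, dst_port))
    if allowed and local is not None:
        decision = ("forward",) + local        # translate to actual address/port
    else:
        decision = ("drop", None, None)        # drop flow deletes at line rate
    flow_table[key] = decision
    return decision
```

In the real program the dict lookups would be hash-table lookups on packed keys, and the "forward" decision would also program the bidirectional TCP address/port translation.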

Establishing a flow takes longer than switching a packet, which is where denial-of-service attacks come in; however, their effect can be limited by several techniques. Using hash tables over the limited number of database entries, lookups are very quick and flow decisions can be made quickly. If the connection decision fails, the flow is set up to delete packets, which can then be deleted at the full incoming packet rate. A hash table of incoming IP addresses can be maintained. Under a condition of sustained high-rate connection requests (an attack), the record of which source addresses were previously accepted, which failed, and which are new can be used to prioritize incoming flow-establishment processing, immediately deleting low-priority requests until the new-connection rate has dropped. All IP sources can be logged, which provides data on which IP addresses are being used for attacks. In tests I ran with a Spirent (test equipment) where the flows arrived all at once (the first 1000 packets each had a different IP address), 1000 simultaneous new flows had surprisingly little impact on the zero-loss maximum packet rate, indicating the flow-establishment processing was working very well. This depends on the flow-establishment decision rate, which would differ for this system, but methods are available to make this robust and able to handle large new-connection rates.
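The history-based prioritization during a connection flood can be sketched like this (a minimal Python illustration; the priority levels and thresholds are assumptions, not the measured design):

```python
from collections import defaultdict

# Hypothetical prioritization of flow-setup requests during a connection
# flood: sources with a history of accepted flows are processed first,
# sources with only failures are deleted first until the rate drops.

history = defaultdict(lambda: {"accepted": 0, "failed": 0})

def priority(src_ip):
    """Higher value = processed first during overload.
    2: previously accepted, 1: never seen, 0: previously failed."""
    h = history[src_ip]
    if h["accepted"] > 0:
        return 2
    if h["failed"] > 0:
        return 0
    return 1

def order_requests(src_ips):
    # Sort pending flow-setup requests, best-known sources first.
    return sorted(src_ips, key=priority, reverse=True)
```

The same history table doubles as the attack log of which source addresses were accepted, failed, or new.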

Since access is based on incoming IP address and target port, each credential can have its own unique address and port for the service. If an IP address tries to use ports that were defined for other users (not considered in the preliminary prototype), that is evidence an attack is going on and indicates which associated system might be involved. Data-flow logging can also cover failed connections, which is desirable both for debugging communications failures and for logging attack attempts. Flow logging is highly desirable, but there is not room here for a detailed discussion.

For an IP server's outgoing client connections to other IP servers, some communications to other components is needed. In the same way, the privileges and allowed access locations can be established in database tables. Since a compromised system often uses outgoing communications to report back to the attacker, these unknown/unregistered connection types can be blocked, and the IP locations to which the communication attempts are being made can provide information about the current attack systems on the internet.

In essence, the network communications rules between systems can be defined as desired by using nonconventional networks. Using servers running high-speed packet processing communications code that owns the physical Ethernet interfaces, the realtime task, in the form of either a host task or a VM, does only communications processing and allows no control communications over the data interfaces, making it resilient to attacks. These types of programs can be written using libraries like the DPDK. Since these systems are separately controlled and have physical networks separate from the controlled general-purpose servers, unexpected communications can be logged and suppressed. These systems can be programmed to do address translation (address and port), Ethernet address routing, load balancing, packet switching, nonconventional firewall processing, white-list-only access, routing of suspect communications to honey pots, translation between IPv4 and IPv6 addresses, etc., as needed.

This task is to create a document that discusses methods of using real-time packet processing programs to dynamically control the internet-access IP addresses and IP ports of high-value IT server assets.

3.2.7 Task 7: Data-Center/Cloud Communications Access Control Prototype

Shown in Figure 2: Cloud Architecture that Supports Arbitrary IP Addressing is the real-time internet-to-compute-system communications processing. This task is to create a DPDK high-speed packet processing demonstration program, running on a Linux CentOS 7 server, that will look up communication IP and port credentials from one database table and the location of the service from another database table (using hash tables for high-speed lookup), translate the flows to the target IP and port address, and switch the communications to the correct Ethernet interface. The outgoing communications will be translated based on the incoming flow, except that the created port ID for the TCP/IP communications establishment port will be converted, and input flow will be allowed for the newly established port. This is not to create a full internet constellation node, but to put together a minimal set of systems showing that the realtime communications packet processing functions with an alternative communications network rule set with restricted access.

For this initial version, the database entries will be loaded by hand and sent to the database server. The local standard TCP/IP client database subsystem will contact the database server to update the local copies of the database tables. Hash table lookup code will be created for the high-speed search cases where a moderate number of credentials needs to be searched quickly.

This will be a basic/preliminary high-speed realtime packet processing DPDK-based program, without all the features described in the R&D section or the optimization development cycles often needed for very high quality, high-performance realtime packet processing programs.

3.2.8 Task 8: Current Access Database Study and Document

Indirectly informing a controlled set of systems when a computer might be accessing one or more of a restricted set of systems is better than allowing access 24 hours a day. This can be done simply by running a program that contacts an access server located at a confidential IP address, confidential port ID, and confidential database table ID/name, and "logs in", which creates entries in the associated access-control database table(s) used by the communications systems controlling access to the IP servers. This may also include the services the user wants to access, selected by checking boxes in a list, which the user can change at any time. During the access period, this program occasionally contacts the server to keep the access database record current, and when the user "logs out", it updates the access database tables to remove the access privilege entries. With the communications management described later in this document, removal can trigger the discarding of any communication flows, terminating all associated communications. If the system is turned off or shut down, the access privileges are terminated after a moderate period. If the system/user is not logged in, the restricted computers ignore all communication attempts by discarding the packets. These attempts can be logged and might provide information about current attacker(s).
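The login/refresh/expire lifecycle described above can be modeled in a few lines. This is a hedged sketch: the timeout value and record shape are illustrative assumptions, and the real access records would live in the Lightning Fast Database tables, not a local dict:

```python
# Sketch of the current-access login lifecycle (names and timeout are
# hypothetical). The access server keeps a record per logged-in source IP;
# records expire if the client stops refreshing, e.g. the machine was
# powered off without logging out.

KEEPALIVE_TIMEOUT = 300  # seconds without a refresh before access is revoked

access_records = {}  # src_ip -> time of last login/refresh

def login(src_ip, now):
    access_records[src_ip] = now

def refresh(src_ip, now):
    if src_ip in access_records:
        access_records[src_ip] = now

def logout(src_ip):
    access_records.pop(src_ip, None)

def has_access(src_ip, now):
    t = access_records.get(src_ip)
    if t is None:
        return False
    if now - t > KEEPALIVE_TIMEOUT:
        del access_records[src_ip]   # stale record: treat as logged out
        return False
    return True
```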

The login could include confidential system information such as its current IP address, CPU serial number(s), the license ID of the access program, arbitrary information known by both sides of the communications but not by others, etc. This information could help secure the login process and could also help during access to other IP servers, but the IP client and IP server programs would need to be written to include some of these information checks during initial connection, to take advantage of the more detailed confidential data beyond the access-control table entries for a given user IP address.

This task is to study the methods for login access management, research and define a client and server architecture(s), and write a report.

3.2.9 Task 9: Current Access Login Prototype

This task is to create a simple Windows GUI demonstration program that uses the Lightning Fast Database Server and creates an Access Enable data record in a database for the source IP address that is being used by the Data-Center/Cloud Communications Access Control Prototype.

When the user "logs in", the computer, through another program, will be able to get network access to a registered service on the server.

The database table updates occur through an occasional update operation, which could be set for every 5 minutes, that contacts the database server and downloads any new, changed, or deleted records for the database tables handled by that server. When new communications come in that are not registered and the credentials are not present, an immediate database update for that table can be done while the communications remain blocked. If the update shows valid credentials, the communications can proceed. When logging out, the credentials are removed from the server's table, but the communications control table does not change until its next table update. After an update, any deleted credentials can be used to terminate established communications tied to those credentials; however, the normal order should be to terminate communications and then log out, so a termination forced by credential removal may indicate a possible attack.
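The periodic-plus-on-demand update policy above can be sketched as follows (an illustrative Python model; the class name, period, and fetch interface are assumptions, and the real client would speak the Lightning Fast Database protocol):

```python
# Sketch of the local table-update policy: periodic pulls from the database
# server plus an immediate on-demand pull when an unknown credential arrives,
# with the flow remaining blocked until the pull completes. Names are
# hypothetical.

class LocalCredentialTable:
    def __init__(self, fetch_from_server, period=300):
        self.fetch = fetch_from_server   # callable returning the full table
        self.period = period             # seconds between periodic updates
        self.table = {}
        self.last_update = None

    def maybe_update(self, now):
        if self.last_update is None or now - self.last_update >= self.period:
            self.table = dict(self.fetch())
            self.last_update = now

    def check(self, credential, now):
        """True if the credential is valid; blocked traffic waits on this."""
        self.maybe_update(now)
        if credential in self.table:
            return True
        # Unknown credential: immediate update while traffic stays blocked,
        # then re-check once.
        self.table = dict(self.fetch())
        self.last_update = now
        return credential in self.table
```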

For a Lightning Fast Database Server with a low user count, the users could remain connected to the server, and when any table record used by the client programs is updated, these table updates can be sent immediately to the client programs to update their tables. This is not how the database server works at this time, but support for it can be added in Phase II.

3.2.10 Task 10: Constellation Program Manager Study and Document

The larger and more complex attack surfaces of cloud management programs like OpenStack and NFV, as compared to just the OS, make them very costly to research, especially since they are still evolving across multiple environments. Also, changes to open source projects controlled by other people are obtained at extremely high cost (I observed Intel's attempts to get the changes needed to run realtime DPDK programs on OpenStack with correct resources, NUMA node management, and huge memory allocation), and changes will not always be accepted. It is easy to configure and use a mainline Linux OS like CentOS 7 to run realtime programs correctly; however, OpenStack probably still does not have the correct support, and current NFV systems require a complex, poorly defined, and costly setup well beyond what any small company can afford. In theory, NFV should be able to run DPDK programs that run on plain Linux without extensive integration effort. There really is a requirement to have an open horizontal system, but most NFV control systems I am aware of have a highly vertical element. The only practical approach is to run a CentOS 7 based stack of servers and a Program Execution Manager IP Server that uses basic programming and scripts to control and run applications and/or VMs on these systems, for dynamic control of programs across an internet constellation (an arbitrary group of cloud servers) where each might have diverse control systems.

The CPM (Constellation Program Manager) is to manage a local internet runtime installation and coordinate resources and program execution at the local network level. Higher-level control programs, using location services to find these managers through constellation database tables, can request the execution of programs on available compute resources, with each compute resource executing a PEM (Program Execution Manager). The CPM can take the form of a realtime program as shown earlier, or can use the realtime communications system(s) to manage the communications.

This task will study and document the program and VM management features needed to manage programs for dynamic reconfiguration of IP servers (applications), and how this might be deployed on a stack of Linux servers to manage their compute, storage, and network resources. It will discuss how current runtime program execution locations can be logged in location databases and used for dynamic runtime program-to-program communications mapping. Old database entries, kept in the database server until purposely cleared, can be used to determine runtime application and IP server relationships, which can help determine requirements for future deployments of the IP server application set.

3.2.11 Task 11: Preparation of Mid-Project PowerPoint Presentation

Create a PowerPoint presentation of mid-project results for presentation at the sponsoring OSD facility.

3.2.12 Task 12: Presentation of Mid-Project PowerPoint at Sponsoring OSD Facility

Travel to the sponsoring OSD facility, meet and present the mid-project results PowerPoint document, discuss prototype operation and results, show remote network prototype demonstrations, and engage in project-related discussions.

3.2.13 Task 13: PEM (Program Execution Manager) Study and Document

Each runtime endpoint system, either a full server or a VM, will run a PEM (Program Execution Manager) IP client that registers with the CPM (Constellation Program Manager). This control program is not for a particular data center, but helps in program execution control across an internet constellation: an arbitrary series of data centers and/or endpoint networks scattered across the internet, which is just an artificial construct defined by a database table. It runs programs either from simple known command-line arguments or from a startup script file, and can do some runtime monitoring and logging of the execution environment.

One function of the PEM is to provide a list of the system's resources that are dedicated for use by the program management system. This is decided by the system administrator, since the system might not be fully dedicated. The data is placed in an information file and describes the resources available for use, such as the number of cores, memory, communications bandwidth, etc. For a fully dedicated server, for example a system with 2 CPUs, each having 32 hyper-threaded cores and 256 GB of memory, and 4 dedicated 10 GbE interfaces, the values listed would be 64 cores, 512 GB of memory, 4 x 10 GbE, etc. For a server used to run 500 VMs, each VM's PEM would list 64/500 cores, 512/500 GB, and 4/500 x 10 GbE. When the system is started, the PEM starts up, its IP client makes a connection to the manager CPM IP server, and it registers its available resources with the CPM, which records them as a database entry in the service location resource table for that location. For a given location, all available controlled resources are registered in a resource database table.
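The resource-advertisement arithmetic above can be written out directly (a small Python illustration of the example numbers in the text; the function names are hypothetical):

```python
# Resource advertisement for a PEM, using the example from the text:
# a fully dedicated 2-CPU server with 32 hyper-threaded cores and 256 GB
# per CPU plus 4 x 10 GbE advertises totals; a PEM inside one of 500 VMs
# on that server advertises a 1/500 share of each value.

def server_resources(cpus=2, cores_per_cpu=32, mem_gb_per_cpu=256, nics_10gbe=4):
    return {"cores": cpus * cores_per_cpu,        # 2 * 32 = 64
            "mem_gb": cpus * mem_gb_per_cpu,      # 2 * 256 = 512
            "nics_10gbe": nics_10gbe}

def vm_share(total, vm_count):
    # Each of vm_count identical VMs advertises an equal fraction.
    return {k: v / vm_count for k, v in total.items()}
```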

When a new program needs to be executed: each program has a list of the resources it needs during expected normal operation, as determined by experience and system administrator usage expectations. When a program is started, these values are used to keep an "in use" entry in the service location resource table, allowing available resources to be easily calculated by subtracting the in-use entries from the available entries. Since the arbitrary constellation database tables can be available across the internet, a control program, competing with other control programs and wanting to find a place to run a program, can search its local copies of the service location resource tables for a resource on which to execute it. This control program can be written to take into account preferred locations, probability of attack, and other factors known only to the control program. It then contacts the selected CPM, requests execution resources, and, if they are still available, dedicates the resources, locates and gets a program copy from the repository manager, and executes the program, which registers the resource usage in the table. If the resource is not available when contacting the CPM, the control program can continue searching other service location resources in the table. General-purpose control programs can thus find and locate resources across the constellation and execute programs at arbitrary, convenient internet locations from simple requests.
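The available-minus-in-use placement search can be sketched as follows (illustrative Python; the location names, resource keys, and first-fit policy are assumptions, since a real control program would add preferences such as location and attack probability):

```python
# Sketch of control-program placement: subtract the "in use" entries from
# the "available" entries in the locally replicated service location
# resource table and pick the first location that can still hold the
# request. All entries are hypothetical.

locations = {
    "site-a": {"available": {"cores": 64, "mem_gb": 512},
               "in_use":    {"cores": 60, "mem_gb": 480}},
    "site-b": {"available": {"cores": 64, "mem_gb": 512},
               "in_use":    {"cores": 8,  "mem_gb": 64}},
}

def free_resources(loc):
    a, u = loc["available"], loc["in_use"]
    return {k: a[k] - u[k] for k in a}

def find_location(request):
    """Return the first location whose free resources cover the request,
    or None; the real program would then confirm with that site's CPM."""
    for name, loc in locations.items():
        free = free_resources(loc)
        if all(free[k] >= request[k] for k in request):
            return name
    return None
```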

This task is to study, determine variations of, and document a PEM architecture suitable for executing programs in internet constellations with varying numbers and sizes of cloud systems.

3.2.14 Task 14: Server Protocol Stack Change Study and Document

It is convenient to use the default communications protocol stack of a system. However, systems like Linux allow new operating system code to replace the original code. A new communications protocol stack with different characteristics can be written, including detection of unexpected/unauthorized communications into or out of the system. During system initialization, the IP protocol stack can be changed. Attackers would expect the system to act in the standard way, so deviations from that expectation can be used to set alarms. For example, the servers can use a new set of socket communications system service subroutines, leaving any standard socket call connected to an alarm system and/or to code that checks whether the currently running task is on the list of system programs allowed to use the standard socket calls.
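The "standard socket call as tripwire" idea can be modeled simply. This is an illustrative Python sketch, not kernel code: the allow list, task names, and alarm hook are all hypothetical, and a real implementation would live in replaced system-call/service-routine code:

```python
# Model of intercepting the standard socket entry point: legitimate
# applications use a replacement socket API, so any task invoking the
# standard call is checked against an allow list; unexpected callers
# raise an alarm. Task names and the alarm list are hypothetical.

ALLOWED_STANDARD_SOCKET_TASKS = {"sshd", "ntpd"}

alarms = []

def standard_socket_call(task_name):
    """Stand-in for the intercepted standard socket() entry point.
    Returns True if the call is permitted, False (and alarms) otherwise."""
    if task_name not in ALLOWED_STANDARD_SOCKET_TASKS:
        alarms.append(task_name)   # unexpected caller: likely attack software
        return False               # refuse the call
    return True
```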

With applications using packages like DPDK, the application can fully control the system's communications protocol, since it controls all contents of the communications.

This task is to study and document techniques and protocol stack architectures for changing a system's communications protocol stack, and possible new protocol architectures that would help in automated communications reconfiguration and in the detection and suppression of communications from attacking software executing on a Linux server. This will also include features that can be added for dynamic changing of protocol communications endpoints, such as supporting port subsets of multiple network IP addresses and dynamically altering these sets.

3.2.15 Task 15: Application Writing Techniques Study and Document

This work is to document potential techniques for making a server more resilient to cyber attacks, for adding the newly developed capabilities to applications, and for supporting the new reconfigurable access surfaces. There is an infinite number of bad methods and a substantially smaller number of good programming methods to combat cyber attacks. These include designing good protocols; for example, the server does not respond on a TCP/IP connection, which might allow classification of the server, until a valid protocol and possibly valid credentials are presented. To help with automation, the IP server can start up, obtain a usable server access port, and then report the access IP address and port to a system with an underlying database that supplies location information to programs needing the services.

Communications protocols are the programmed method of IP server communications. There are good protocols and poorly done protocols. Often multiple protocols for different features will coexist, with their execution mixed together without interfering with each other. For example, data blocks could be intermixed with error-logging information by having block types such as data and error-log block types. With the internet, full bidirectional communications applications are fairly easy to write, and with good programming, separate protocol processing can be kept in separate program files. Each protocol should be well defined and robust, so that any error, like a wrong block size, can be easily detected and logged and the communications terminated. By using byte packing of structures and data, and a well-defined format with limited variations, errors like buffer overruns, which can be used to attack a system, can be prevented. Poor protocols that allow a high degree of flexibility, such as variable-text-string-based protocols like REST-style protocols, are much more difficult to fully test and much more easily attacked. Having multiple communications protocols for the same communications function can be done, but it must be done by the application writer and is not a function of program execution or system-level configuration.
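The typed, byte-packed block format described above can be sketched as follows (illustrative Python using fixed-size packed headers; the block-type values, header layout, and size limit are assumptions for the example):

```python
import struct

# Sketch of a typed, byte-packed block protocol: every block is a fixed
# header (type, payload length) followed by a payload whose length must
# match the header exactly, so any malformed block is rejected before
# processing. Block-type values and limits are hypothetical.

HEADER = struct.Struct("<HH")        # block type, payload length
BLOCK_DATA, BLOCK_ERRLOG = 1, 2      # e.g. data and error-log block types
MAX_PAYLOAD = 1024

def pack_block(btype, payload):
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("payload too large")
    return HEADER.pack(btype, len(payload)) + payload

def unpack_block(buf):
    """Return (type, payload), or raise ValueError on any mismatch; in the
    real design an error is logged and the connection terminated."""
    if len(buf) < HEADER.size:
        raise ValueError("short block")
    btype, plen = HEADER.unpack_from(buf)
    if btype not in (BLOCK_DATA, BLOCK_ERRLOG) or plen > MAX_PAYLOAD:
        raise ValueError("bad header")
    if len(buf) != HEADER.size + plen:
        raise ValueError("length mismatch")   # prevents overrun-style attacks
    return btype, buf[HEADER.size:]
```

Because every field is fixed-size and the declared length must match exactly, the full input space of the parser is small enough to test exhaustively, unlike a free-form text protocol.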

A person cannot just slap some communications code together and expect a resilient, unhackable communications protocol. Features like encryption can be changed on the fly at well-defined communications points. This study will compile a list of techniques for good protocol design and dynamic reconfiguration methods, and also a list of techniques, like REST-style protocols, that would be poor choices in communications protocol design.

3.2.16 Task 16: Apply for ITAR for Phase II

The SBIR contract profit structure is not conducive to handling the high-cost ITAR and classified-personnel cost structures desired by DoD support. This task is to apply for an ITAR license per the Contracting Officer's request. If the Contracting Officer does not authorize and make the request, no ITAR license will be applied for.

3.2.17 Task 17: Project Management

General project support and management. Work scheduling, planning, and progress tracking and review. SBIR, government contracting compliance review and planning. Monthly review and reanalysis of cost estimate for project.

3.2.18 Task 18: Monthly Status and Progress Reports

Status and progress reports document the status of the overall project, the project's objectives for the month, the progress of each task, results obtained, and any concerns. They are provided within 15 days after the completion of each month, excluding the last month, which is covered in the Final Report.

3.2.19 Task 19: SF-298 Form

Complete SF-298 Form and Submit.

3.2.20 Task 20: Phase I Final Report

The Final Report contains detailed information on project objectives, work performed, and results obtained, plus a final copy of each study component. It is provided within 30 days of Phase I completion.

4 Related Work.

I have created a high-performance database system in which the servers manage updates to keep a large number of users/copies of database tables highly coordinated, with updates ordered, while the data is utilized at the user endpoints. Data is changed at the endpoints either through automation for generating, changing, and deleting records, and/or through GUI (currently Windows only) database systems. The GUI can also be used to review database table changes made by automation and other inputs. Also see the section on Relevant Experience of the PI.

5 Relationship with Future Research or Research and Development.

(a) The anticipated results of Phase I are technical reports, as portions of the final report, in support of automated reconfiguration of protected compute assets, covering: 1) indirect logon for service access; 2) obscure dynamic IP server location service techniques; 3) packet processing for contrary network communications rules and arbitrary rule variations; 4) basic program and VM execution management control for a stack of general-purpose CentOS 7 servers; 5) an alternative Linux protocol stack that works contrary to expected operation; and 6) programming methods for writing IP server programs that support dynamic location and execution environments. Prototype software will include a basic high-speed DPDK realtime network switch program running on CentOS 7 for communications to the general-purpose computer stack, which will control flows through database access-control lists and service location lists, and a simple Windows access login control program that adds and removes users in a control-list database table to demonstrate dynamic access to an IP server.

(b) The significance of the Phase I objective is to set the options and groundwork for defining a robust, reconfigurable execution environment: a quickly reconfigurable IP server environment and reconfigurable internet network constellations that can be quickly created and destroyed, with internet constellation cloud regions quickly removed and/or deployed to other internet constellations, where the constellations support deployment of multi-tier runtime environments with a quickly changeable attack surface. This creates a foundation for Phase II, where these system definitions can be refined and all the programs created to produce a fully featured internet constellation system that can quickly change access and execution locations, using databases to make IP server reconfiguration effective and to support automated management.

(c) Regarding clearances, certifications, and approvals: no foreign nationals or other restricted persons will be working on this project, and Lightning Fast Data Technologies Inc. currently has not employed anyone who falls under those rules. The principal investigator, Mike Polehn, held a Secret clearance in the past while working at Boeing Aerospace. Lightning Fast Data Technologies Inc. has held ITAR licensing in the past.

6 Commercialization Strategy

This effort will add advanced capabilities to the Lightning Fast Database Server for managing and replicating database tables across multiple/many programs and locations, in application programs such as realtime packet processing programs, IP server programs for location services, and applications such as access login programs, allowing complex system management through data entries in various database tables. This may also result in additional security features and improvements that make it more robust and resilient in a hostile internet environment.

This will develop and show the capability of Lightning Fast Data Technologies Inc. to deliver realtime packet processing for dynamic, runtime-controlled network communications, allowing access-control white lists, flow management, and flow logging, as well as putting in place other future capabilities such as a multi-system attack IP address no-access list (an IP-type firewall). The DPDK program(s) should be able to run in an NFV Telco environment; however, this greatly depends on the NFV control systems/methods and policies of the Telco installations, so it cannot be determined until attempted with a cooperative NFV system owner.

This also helps put in place components for future development of an attack-control firewall: when an attack is detected from an IP address, that address can be put on a known-attack IP source database list, from which users, businesses, and governments can get a firewall service that suppresses communications from known attack IP addresses. The owners of those IP addresses would need to clean up/fix their systems before being removed from the attack list to re-allow full internet access. This would greatly reduce attack capability and would drive detection and system cleanup across the internet. A person can have attack software on their system, but if their system is used in unauthorized attacks, its IP address will be quickly suppressed.

7 Key Personnel.


Oregon State University, BS in Electrical and Computer Engineering, 1983


During the 6 most recent years at Intel, worked on Ethernet-related network projects including writing, optimizing, and tuning high-speed realtime packet processing. Work included DPDK coding and optimization of high-speed vSwitches (virtual packet switches), packet processing code, and 10 GbE network device driver code. Did extensive zero-loss performance testing using a Spirent network test system. Tests included switches switching between 2 to 8 interfaces owned by one CPU NUMA node (single CPU socket performance on a server with 2 installed CPUs). Ran tests with 1 to 12 separate realtime DPDK packet processing programs on a single server to determine how peak zero-loss performance changed as the system was loaded with multiple high-throughput realtime loads. Also analyzed network card input/output system performance characteristics as high-volume (up to 100% bandwidth) network traffic was put through 2 to 24 10 GbE interfaces on one IA server. Provided education to coworkers and outside interests (AT&T) on properly setting up, correctly managing, and running realtime packet processing tasks on the Linux host, and on running realtime packet processing tasks in VMs, with network interfaces owned by a Linux VM, at better than 98% of the performance of running directly on the host.

Analyzed and measured execution clock cycles of small blocks of code up and down the Linux IP protocol stack, from the network card device driver to the IP socket application, and created a large flow chart with all of the measurements for each flow chart block, for 3 generations of IA CPUs, in support of improving IP protocol stack performance.


Senior computer development engineer with over 30 years of experience in Digital Signal Processor (DSP) development, device driver development, infrared sensor digital signal processing and sensor analysis, network SW development, and embedded development. Has both SW (primarily C/C++) and HW development experience across the full development cycle of definition, documentation, design, development, debug, and test. Combining HW, SW, and analytical experience provides superior computer engineering capabilities over any of these skills by themselves. Flexible, self-directed, and an independent thinker, capable of doing very complex work with little or no supervision.

Intel: Created programs that utilized Xeon CPUs to do high speed realtime network packet processing on host or in VMs, extensive network performance characterization, Telco communications, PBX interfaces, audio subsystems, video conferencing, embedded controllers, device drivers, BIOS work, and Linux, NetBSD, Windows software and driver development work. Detailed Linux communications IP protocol stack section CPU clock usage measurements and detailed block diagrams for 3 different Xeon CPU generations and experiments to improve protocol stack performance, including calling into the device driver from socket queue code when the socket RX queue is empty to push any new network device packets through protocol stack to the socket RX queue.

Northrop Grumman, L3 Communications, Sensors Unlimited, Boeing: Airborne infrared sensor R&D, sensor data analysis, algorithm development, signal processor HW development, Windows-based test and analysis programs, night vision goggle and scope test programs, data and performance studies, infrared camera tuning, issue resolution, and operational performance improvements, and FPA sensor chip test SW, analysis, and documentation data improvements.

Diamond Multimedia, RadiSys, Oresis Communications, Racal Data Communications, Acers Gaming, RedcellX, Columbia Sportswear, Advanced Technology Labs: Spec communications modems, telecom switching equipment, modem data pump code, BIOS code, embedded firmware, hardware design, device drivers, ultrasound equipment.

Note: Very Abbreviated resume. Full resume available on request.

8 Foreign Citizens.

No foreign citizens or individuals holding dual citizenship, working as a direct employee, contractor, or consultant, will work on or have access to this Phase I project.

9 Facilities/Equipment.

The physical facilities needed to carry out Phase I are just some computers and office space, since this will be programming, documentation, and test work. Currently available are 5 Windows- and Linux-based PCs for development. Have 1 i7 CentOS system; may build another i7 PC, and will need to purchase some 10 GbE interfaces for the project, but all will be capital equipment.

The facilities meet all environmental laws and regulations of Federal, Washington State, and local governments for, but not limited to, the following groupings: airborne emissions, waterborne effluents, external radiation levels, outdoor noise, solid and bulk waste disposal practices, and handling and storage of toxic and hazardous materials.

10 Subcontractors/Consultants.

No Subcontractor or Consultants are required for Phase I.

11 Prior, Current or Pending Support of Similar Proposals or Awards.

No prior, current, or pending support received for proposed work.

12 Discretionary Technical Assistance.

No Discretionary Technical Assistance (DTA) required for Phase I.

13 Project Specific Data and Cost Volume Itemized Listing.

The preliminary week-by-week schedule is as follows, with room for time-of-year adjustment. It will be adjusted as needed to accommodate holidays, the Christmas-to-New-Year's non-work week, and possibly the interns' finals-week schedules, but all work will remain within 40 hours a week and will be complete by the end of week 39 (month 9).

Work Schedule

Figure 3: Preliminary Work Hourly Schedule for PI and Interns

a: Special Tooling and Test Equipment: $0.00

b: Direct Cost Materials: $0.00

c: Other Direct Costs: Apply for Secret Clearance, est. $3000 to $15000. ITAR: $2250. The work in this project, for both Phase I and Phase II, can be developed independent of any government-, military-installation-, intelligence-, or person-specific information. The ideas presented under this proposal are general cyber security methods: although more costly than having nothing or using general network equipment, and not exhaustively covering all possible methods, they apply equally well to commercial and military needs; they are, however, confidential for competitive reasons. There is no nuclear information, user-specific data, intelligence-specific data, or installation-specific work involved. Under the SBIR definitions, installation-specific work would be a Phase III task. Due to the very high extra cost of obtaining and maintaining clearances, this would best be avoided unless there is a strong compelling reason for it. It has nonetheless been budgeted for, and the decision is left to the Contracting Officer; if clearances are required, the associated costs must be maintained through Phase II and carried as part of any Phase III project overheads. This is not the first time I have had to deal with this directly, which is why it is stated here. Any classified information must be marked as such, and the need to receive specific information must be established before it is received, on a need-to-know basis only. Declaring all arbitrary information classified, merely because the organization handles classified information, is improper. In essence, the type of classified information and its handling must be agreed on before disclosure. Handing over arbitrary classified information without an established "need to know" beforehand will not be acceptable.

d: Direct Labor: Principal Investigator: $71400 for 1020 hours. Interns: $15000 for 1000 hours.

e: Kick-off and Mid-Project Trips: Est.: Airline: $600, Hotel: $200, Car: $100, for each trip.

f: Cost Sharing: $0.

g: Subcontractors: $0.

h: Consultants: $0.

i: Reports are monthly, simple, unpolished interim reports, plus the attached items specified in the milestone schedule, adjusted for the possible insertion of the Christmas-to-New-Year's holiday week. Monthly billing is in accordance with completed project hours, rates, and overheads, as entered in the cost volume, and with the contract's monthly cost accounting requirements. The sample interim report model is somewhat incompatible with the expected cost accounting methods and would take extra work to reconcile.

j: Evidence of DD Form 2345, Application Submission:

(Irrelevant info removed)

k: Discretionary Technical Assistance not requested.

Post Proposal Comments

Another one bites the dust.

It is disappointing to waste the time and get no value.

Tried calling and contacting David Climek several times before the submittal period, leaving messages, to get a better understanding of the needs, but this didn't happen, so this was based solely on the solicitation information.

Asked for a debriefing, but it was not provided.

Sitting on my computer, this has no value, since it was written for the target needs, which probably will not come up again, and I can always come up with a new formula for the next proposal. It probably has more value being visible, since knowledgeable people will be able to determine that I am knowledgeable, and it is much more desirable to work with knowledgeable people than with unknowledgeable ones.