Emerging Cybersecurity Technologies

By: Amy Wees


June 9, 2013


Abstract: Advanced cyberattacks on the public and private sectors at the local, national, and international levels have prompted increased funding and support for the study of emerging cybersecurity technologies. This paper discusses the emerging technologies and strategies that can be integrated across the public and private sectors to improve cybersecurity at the local, national, and international levels. New technologies need to assess networks dynamically and in real time, such as through the use of remote agents and real-time forensic analysis. These technologies also need to make the attack space less predictable and constantly evolving, such as through the use of moving target defense.

Emerging Cybersecurity Technologies

The E-Government Act of 2002 was signed by President Bush to move toward a 24/7 government. The dream was to eliminate the need to stand in line at the DMV for half a day just to pay annual vehicle registration fees (Barker, 2011). Security was certainly a concern, but it was not at the forefront of the move, as government agencies would undergo massive changes in equipment, manning, and practices in order to move information and programs online. Now, more than a decade later, changes are still taking place, such as the Department of Veterans Affairs recently moving all of its applications, forms, and records online. The high cost of catching the government up was expected with such an overhaul of the system; however, the U.S. should have invested more in cybersecurity and has had to learn that lesson the hard way. The recent breaches by Anonymous of the FBI's and the Department of Homeland Security's systems were disappointing, as these are the two government agencies tasked with taking on cybercrime (Novasti, 2012). How does the government expect to control the protection of SCADA systems for critical infrastructures, as recently proposed by Congress, if it cannot protect its own assets (Associated Press, 2012)? Annual Federal Information Security Management Act (FISMA) audits still point to lax practices (US SEC, 2011).

In 2009, President Obama reportedly ordered a malware-based cyberattack against the computer networks supporting Iran's nuclear program through the use of the Stuxnet worm, which has been noted as the first use of cyber as a weapon by the US. More recently, Iran has experienced further cyberattacks linked to its nuclear systems and operations (Airdemon, 2010).

Advanced Persistent Threats (APTs) have changed the cybersecurity game, as APT attacks can be so sophisticated that many well-known detection and mitigation techniques are not effective against them. An APT that utilizes targeted exploit code leveraging zero-day vulnerabilities will not be detected by intrusion detection systems or anti-virus products (Casey, 2011). The issue is that once the malware is detected, it may not be obvious how long it has been operational. Further, in the case of an APT, it cannot be determined whether the discovered malware is the entirety of the compromise; state-sponsored attackers may leverage multiple malware tools to maintain access. With the aforementioned attacks on critical infrastructures and government systems, as well as an overall increase in the complexity of cyberattacks, governments at the international level now consider cybersecurity more crucial than ever before.

This paper discusses the emerging technologies and strategies that can be integrated across the public and private sectors to improve cybersecurity at the local, national, and international levels. New technologies need to assess networks dynamically and in real time, such as through the use of remote agents and real-time forensic analysis. These technologies also need to make the attack space less predictable and constantly evolving, such as through the use of moving target defense.

Moving Target Technologies

Moving Target (MT) technologies aim to constantly change the attack surface of a network, increasing the cost for an attacker and decreasing the predictability of, and the vulnerabilities present in, the network at any given time (NITRD, 2013). The cybersecurity problem with most networks today is that they are static, making them easy targets for an attacker to analyze over time while strategizing on the best way to capitalize on vulnerabilities. Moving target defenses allow the network to continually change its configurations and environmental values (Grec, 2012).

For example, an organization could change network IP addresses, operating systems, open ports and protocols, and many other aspects of the environment. This way, when an attacker scans the network, the scans are not consistent, and if an attack is launched, the chances of successful penetration are severely reduced because of the dynamic changes in the environment. An MT defense could also react to an attack by reducing the areas of the network known to or accessed by the attacker (Grec, 2012).
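To make the idea concrete, the following sketch rotates a service's externally visible address and port on a fixed schedule. It is only an illustration of the rotation concept, not any vendor's product; the address pool, port range, and rotation interval are invented for the example.

```python
import random
import time

# Hypothetical pools the defender is willing to rotate through.
ADDRESS_POOL = [f"10.0.{subnet}.{host}" for subnet in range(1, 5) for host in range(10, 20)]
PORT_POOL = list(range(20000, 21000))
ROTATION_INTERVAL = 300  # seconds between configuration changes (assumed)

def new_configuration():
    """Pick a fresh, unpredictable surface for the protected service."""
    return {"address": random.choice(ADDRESS_POOL), "port": random.choice(PORT_POOL)}

def apply_configuration(config):
    # In a real deployment this step would update routing/NAT rules and an
    # authenticated mapping service so that legitimate clients can follow.
    print(f"Service now reachable at {config['address']}:{config['port']}")

if __name__ == "__main__":
    for _ in range(3):  # rotate a few times for demonstration
        apply_configuration(new_configuration())
        time.sleep(ROTATION_INTERVAL)
```

Because the attacker's reconnaissance goes stale after each rotation, scans taken minutes apart describe what look like different networks, which is exactly the unpredictability MT defenses are after.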

The most difficult challenge in using MT is maintaining an operational network for users during the changes while minimizing the costs involved. The JumpSoft company has created a subscription-based MT defense package called "JumpCenter." JumpCenter uses reactive and adaptive automated systems that reduce the attack surface. The concept behind JumpCenter and MT defenses generally is to maximize the cost and risk to the attacker. JumpCenter keeps the network operational by deploying in the application layer, which is the more exploitable layer because it is updated regularly through vendor releases. JumpSoft adds the argument that downed applications hit the mission harder, because the loss of a single application can bring down the business (JumpSoft, 2013).

Government Support of Moving Target Technologies

In January 2011, the President's Council of Advisors on Science and Technology sponsored the work of the Networking and Information Technology Research and Development (NITRD) program. NITRD has identified emerging technologies such as MT as Federal Cybersecurity game-changing research and development projects (NITRD, 2013). The government's efforts to support NITRD and other research partners in developing MT technologies back the efforts of the public and private sectors to redefine security in the cyber domain.

For example, in 2011 Professor Scott DeLoach of Kansas State University received a $1 million grant from the Air Force Office of Scientific Research to study MT (Chabrow, 2012). Intelligent defenses can shift the military's position in cyberspace from reactive to active, giving it the upper hand on the adversary. If military networks can be made unpredictable through the use of MT, the chances of successful cyberattacks and APT compromises are lessened.

Remote Agent Technologies

Remote agents, also known as mobile agents, can actively monitor a network's security. Active monitoring is necessary because a network that is not updated with the latest patches has been shown to be reactive and ineffective against today's cyber threats. Additionally, large networks are nearly impossible for a system administrator to monitor successfully, as most are made up of multiple nodes, each with constantly changing systems and users (Tripathi, Ahmed, Pathak, Carney & Dokas, 2002). Remote agents can conduct centralized testing of network security from a remote client or server without large manpower or travel costs. Most importantly, remote agents can run network tests without using insecure firewall protocols (UMUC, 2012).

Currently, many organizations use network monitoring tools based on SNMP or on the occasional execution of scripts written in response to network threats, both of which require tedious and complicated updates to remain current and valid. Both SNMP agents and script-based monitoring procedures offer limited functionality and require specially trained administrators to comb through logs and write updates (Tripathi, Ahmed, Pathak, Carney & Dokas, 2002).

In response to these network monitoring difficulties, a team of students at the University of Minnesota worked under a grant from the National Science Foundation to develop a framework for mobile-agent network monitoring using the Ajanta mobile agent system. The Ajanta mobile agents can remotely filter information and alter system functions. They use a centralized database to detect and compare system events and to ensure that policies are enforced. Using Ajanta, administrators can securely change an agent's monitoring and filtering rule sets and can dynamically remove or add agents to an area of the network based on triggered events. The model presented contains different types of agents that can monitor, subscribe, audit, or inspect.

Perhaps the largest difference between traditional SNMP monitoring systems and a remote agent system is the ability of a remote agent to relate one event to another in the system, generate an alert in the log file, and raise the awareness or threat levels of other agents. For example, if one agent detects a user logging in with multiple accounts and an auditor agent detects a subsequent remote or console login in the event registry, a password or security compromise can be detected. In another example of a system reacting based on an agent, an auditor agent is sent to the login event subscriber by a management station. When a root login event occurs and passes a predefined threshold, an alert is sent back to the manager to raise the alert level on the system (Tripathi, Ahmed, Pathak, Carney & Dokas, 2002). All of this can be done without a system administrator's intervention.
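A minimal sketch of this kind of event correlation follows. It is not the Ajanta framework itself; the event format, the root-login threshold, and the alert logic are assumptions made purely for illustration.

```python
from collections import defaultdict

ROOT_LOGIN_THRESHOLD = 3  # assumed threshold before escalating

class AuditorAgent:
    """Correlates login events and escalates an alert level, as a mobile auditor agent might."""

    def __init__(self):
        self.accounts_per_user = defaultdict(set)
        self.root_logins = 0
        self.alert_level = 0

    def handle_event(self, event):
        # event is a dict such as {"type": "login", "user": "bob", "account": "svc1", "source": "remote"}
        if event.get("type") != "login":
            return
        self.accounts_per_user[event["user"]].add(event["account"])
        if len(self.accounts_per_user[event["user"]]) > 1:
            self.raise_alert(f"user {event['user']} is active under multiple accounts")
        if event["account"] == "root":
            self.root_logins += 1
            if self.root_logins >= ROOT_LOGIN_THRESHOLD:
                self.raise_alert("root login threshold exceeded")

    def raise_alert(self, reason):
        self.alert_level += 1
        print(f"ALERT (level {self.alert_level}): {reason}")

if __name__ == "__main__":
    agent = AuditorAgent()
    agent.handle_event({"type": "login", "user": "bob", "account": "bob", "source": "console"})
    agent.handle_event({"type": "login", "user": "bob", "account": "svc-backup", "source": "remote"})
```

In a deployed system the alerts would feed back to the management station described above rather than print to a console, and agents could be dispatched or reconfigured in response.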

Government Support for Remote Agent Technologies

The government can benefit from the advancement of remote monitoring capabilities, as the largest and most complex networks are government owned and operated. Many coalition military networks cross the boundaries of multiple countries, and the monitoring and security of these government defense networks is in the best interest of everyone involved.

The ability to monitor classified defense networks with this level of clarity across international domains could aid in preventing insider leaks such as Bradley Manning's 2010 leak of military intelligence data to Wikileaks. Although Manning was prosecuted, Wikileaks founder Julian Assange has yet to be prosecuted for publishing classified material on the Internet (Wu, 2011). Until international cyber laws and jurisdiction are better defined, it is in the best interest of all governments to find ways to successfully and dynamically monitor their networks for signs of attack or breach.

Real-Time Forensic Analysis

The use of computer forensic tools in criminal proceedings has proven necessary for making a case in today's digital world. Also related to network monitoring is real-time forensic analysis, an investigative approach that maintains situational awareness through continuous observation of the network (UMUC, 2012). While remote agent monitoring actively watches the network and takes the actions necessary to correlate threats and increase defenses, real-time forensic analysis allows an incident to be reproduced and the cause and effect of the event to be analyzed further (UMUC, 2012).

A Network Forensics Analysis Tool (NFAT) prepares the network for forensic analysis, eases monitoring, and makes it convenient to identify security violations and configuration flaws. The information found when analyzing network traffic can also contribute background data to other events (Corey, Peterman, Shearin, Greenberg, & Van Bokkelen, 2002).

In addition to monitoring the network, network forensics has many practical uses. For example, health care agencies fall under the Health Insurance Portability and Accountability Act, which requires that information passed between networks be monitored. Although all of the information provided by an NFAT may not be necessary, in legal situations it is better to have more information than not enough. An NFAT can also allow for recovery of lost data when other back-up methods fail, and for repeatable analysis of traffic anomalies or system errors (Corey, Peterman, Shearin, Greenberg, & Van Bokkelen, 2002).
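One building block behind such tooling is recording observed network events together with timestamps and integrity hashes so the record can later be reproduced and defended in a legal setting. The sketch below illustrates that idea generically; it is not the design of any particular NFAT, and the event fields and log file name are assumptions.

```python
import hashlib
import json
import time

LOG_PATH = "network_events.jsonl"  # hypothetical evidence log

def record_event(event: dict, log_path: str = LOG_PATH) -> str:
    """Append an event with a capture timestamp and a SHA-256 digest of its contents."""
    event = dict(event, captured_at=time.time())
    serialized = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
    with open(log_path, "a") as log:
        log.write(json.dumps({"event": event, "sha256": digest}) + "\n")
    return digest

if __name__ == "__main__":
    # Example record for a suspicious connection observed on the wire.
    print(record_event({"src": "192.0.2.10", "dst": "198.51.100.7", "dst_port": 23, "proto": "tcp"}))
```

The digest gives an examiner a way to show that a stored record has not changed since capture, which is what makes continuously collected data usable as evidence rather than just as monitoring output.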

Government Support of Real-time Forensic Analysis

Government support of real-time forensic analysis is most visible in the state and federal criminal justice sectors, where forensic analysis is a regular part of legal proceedings and police agencies have expanded to include entire divisions devoted to computer forensics. The question remains whether governments, from the local to the international level, should be concerned with real-time forensic analysis outside of the criminal justice realm. Forensic analysis makes sense from a network defense perspective, as governments can learn more about emerging threats by conducting in-depth analysis of them.

In 2006, the National Science Foundation and DARPA funded a project at Columbia University to create an Email Mining Toolkit (EMT) in support of law enforcement and other government research. The EMT allows email traffic to be analyzed for outside communications, social interactions, and specific attachments. According to the report, the EMT is in use by many organizations (Stolfo, Creamer, & Hershkop, 2006).

Since 1999, DARPA has funded numerous information assurance experiments using live red, blue, and white teams to simulate attackers, responders, and users during cyberattack events such as denial of service, malware, and other threats known, based on intelligence data, to be in use by the adversary (Levin, 2003). Real-time forensic analysis has allowed early detection and analysis of the red team's efforts by the blue team and has contributed lessons learned for future responses.


The responsibility to protect public and private assets at the local, national, and international levels cannot fall solely on the government. Through the cooperative use of government, scientific, and academic programs, emerging technologies can be brought to the forefront to secure cyber assets dynamically and in real time. Increased and continuing cooperation to fine-tune moving target defenses, remote agent technologies, and real-time forensic analysis will ensure these technologies can be implemented across sectors to protect against emerging threats now and into the future.



Airdemon. (2010). Stuxnet worm. Retrieved from http://www.airdemon.net/stuxnet.html

Associated Press. (2012, February 6). Bigger U.S. role against companies’ cyber threats? Retrieved February 25, 2012, from Shreveport Times: http://www.shreveporttimes.com/article/20120206/NEWS03/120206009/Bigger-U-S-role-against-companies-cyberthreats-?odyssey=tab%7Ctopnews%7Ctext%7CFRONTPAGE

Barker, W. C. (2011). E-Government Security Issues and Measures. In H. Bidgoli, Handbook of Information Security (pp. 97-107). Hoboken: John Wiley & Sons.

Casey, E. (2011). Handbook of digital forensics and investigation. Burlington: Academic Press.

Chabrow, E. Government Information Security, (2012). Intelligent defense against intruders. Retrieved from Information Security Media Group, Corp. Website: http://www.govinfosecurity.com/interviews/intelligent-defense-against-intruders-i-1565

Corey, V., Peterman, C., Shearin, S., Greenberg, M. S., & Van Bokkelen, J. (2002). Network forensics analysis. IEEE Internet Computing, 6(6), 60-66.

Grec, S. (2012, May 23). Is moving-target defense a security game changer?. Retrieved from https://www.novainfosec.com/2012/05/23/is-moving-target-defense-a-security-game-changer/

JumpSoft. (2013). Cyber moving target defense. Retrieved from http://www.jumpsoft.net/solutions/moving-target-defense/

Levin, D. (2003, April). Lessons learned in using live red teams in IA experiments. In DARPA Information Survivability Conference and Exposition, 2003. Proceedings (Vol. 1, pp. 110-119). IEEE.

NITRD. (2013). Moving target. Retrieved from http://cybersecurity.nitrd.gov/page/moving-target

Stolfo, S. J., Creamer, G., & Hershkop, S. (2006, May). A temporal based forensic analysis of electronic communication. In Proceedings of the 2006 international conference on Digital government research (pp. 23-24). Digital Government Society of North America.

Tripathi, A., Ahmed, T., Pathak, S., Carney, M., & Dokas, P. (2002). Paradigms for mobile agent based active monitoring of network systems. In Network Operations and Management Symposium, 2002. NOMS 2002. 2002 IEEE/IFIP (pp. 65-78). IEEE.

TV-Novasti. (2012, January 20). FBI Website Crippled by Anonymous. Retrieved February 14, 2012, from rt.com: http://rt.com/usa/news/crippled-fbi-megaupload-anonymous-239/

UMUC. (2012). Module 7: The future of cybersecurity technology and policy. Retrieved from the online classroom https://tychousa.umuc.edu

U.S. Securities and Exchange Commission. (2011). 2010 Annual FISMA Executive Summary Report. Washington D.C.: U.S. Securities and Exchange Commission.

Wu, T. (2011, February 4). Drop the Case Against Assange. Retrieved February 27, 2012, from Foreign Policy: http://www.foreignpolicy.com/articles/2011/02/04/drop_the_case_against_assange?page=0,0




Requirements for Business Contingency and Continuity Plans

By: Amy Wees

CSEC650, 9045

April 21, 2013


Abstract: Technology plays a vital role in business and threats to technology are constantly evolving.  Businesses must be ready to react to a multitude of situations from a computer virus to a hurricane.  The only way to react successfully is to have a well-written, well-tested contingency and continuity plan.  The steps to planning include identifying threats through Business Impact Analysis (BIA), planning for mitigation of risks or reduction of impact to the business through contingency plan development, and setting up recovery options such as backup sites.  Finally, the plan must remain actionable and up-to-date, and the best way to ensure this is through training personnel and testing the plan on a regular basis.

Requirements for Business Contingency and Continuity Plans

On April 17, 2013, a giant explosion ripped through the small town of West, Texas, after the West Fertilizer Company plant caught fire. The cause of the fire is still unknown, but many people were killed attempting to extinguish the massive blaze, air traffic over the area was halted due to the dangerous chemicals released, and structures for miles around the plant were damaged and evacuated (Eilperin & Fears, 2013). Many are probably wondering how this happened and whether the explosion could have been prevented. The Environmental Protection Agency (EPA) reported that the fertilizer plant was fined in 2006 for a deficient risk management plan that failed to address safety hazards, employee training, and maintenance procedures. Furthermore, the owner does not know how he will recover from this disaster (Eilperin & Fears, 2013). Even if West Fertilizer has insurance to cover the damage to the building and company assets, the costs during disaster recovery could be far more than West can afford. Insurance may not cover the medical expenses and deaths of the citizens harmed by the explosion. How will displaced employees be paid? Will there be lawsuits? Was pertinent company data needed to continue operations or file damage claims lost in the fire?

Although the fire may not have been preventable, a contingency and continuity plan would help West Fertilizer Company pick up the pieces and continue operations. West Fertilizer is not alone in its lack of business continuity and disaster recovery planning. A survey conducted by OpenSky Research in 2006 showed that almost half the businesses in America had no business continuity plan in place. Among the companies that did have plans, the survey reported that the greatest motivations were the reputation of the business and customer satisfaction, followed by compliance with regulations and past experiences with operational hiccups. Businesses reported that disruptions to network operations, malware, and data corruption were considered highly threatening, along with natural disasters such as fires and blackouts. Businesses without a plan cited budgetary and resource constraints as the primary factors (On Windows, 2006).

It is obvious that businesses should be concerned with contingency and continuity planning, as it is only a matter of when, not if, something will happen that can shut the business down. Today more than ever, businesses depend on technology such as computers, networks, mobile devices, and the Internet to run their operations. Protecting these assets from cybersecurity threats and service disruptions is paramount to the bottom line and to customer satisfaction. However, in order to convince management that business continuity planning is a worthwhile investment, management must understand the return on the investment and be shown a plan that weighs the benefits of implementing cybersecurity, maintenance, and safety protocols against the costs of installing them. The argument for a plan must help management see a Return on Investment (ROI) so that forecasted returns on money spent can be estimated. The ROI calculation should include the purchase of the proposed solutions, the cost of employee training, and the cost of paying the staff who will manage the solutions; together these account for the Total Cost of Ownership (TCO) of the investment. If costs are not projected accurately, management may reject the proposal or restrict the budget (UMUC, 2011).
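As a rough illustration of that calculation, the sketch below totals a TCO and compares it against the expected loss avoided to produce a simple ROI figure. The dollar amounts are invented for the example and are not drawn from any source cited in this paper.

```python
def total_cost_of_ownership(purchase, training, annual_staff, years):
    """TCO = up-front purchase + training + staffing costs over the planning horizon."""
    return purchase + training + annual_staff * years

def return_on_investment(expected_loss_avoided, tco):
    """Simple ROI: net benefit divided by total cost."""
    return (expected_loss_avoided - tco) / tco

if __name__ == "__main__":
    # Hypothetical figures for a small continuity-planning investment.
    tco = total_cost_of_ownership(purchase=50_000, training=10_000, annual_staff=30_000, years=3)
    roi = return_on_investment(expected_loss_avoided=250_000, tco=tco)
    print(f"TCO over 3 years: ${tco:,}")  # $150,000
    print(f"ROI: {roi:.0%}")              # 67%
```

Presenting the proposal in these terms lets management compare the plan against other investments on familiar ground, which is the point of the ROI argument above.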

This paper covers the steps for identifying threats and risks to a business, creating and maintaining business contingency and continuity plans, and selecting options for recovery of data and business operations, and it closes with a recommended twenty-four-month cycle for putting the plan into practice through business continuity testing.

Developing Business Contingency Plans

According to the National Institute of Standards and Technology's (NIST) contingency planning guide for federal information systems, there are seven key steps to developing a plan: 1) construct the contingency planning policy; 2) complete a business impact analysis (BIA); 3) pinpoint preventive measures; 4) produce contingency approaches; 5) create an information system contingency plan; 6) conduct testing, training, and exercises; and 7) ensure the plan is maintained (Swanson, Bowen, Phillips, Gallup & Lynes, 2010). Although these steps are written specifically for federal systems, they can be used by any business as an overall framework for developing a contingency and continuity plan. For the purpose of this paper, the seven steps are simplified into three broader areas: 1) identify threats to the business; 2) create a plan to alleviate or lessen the impact of the threats; and 3) train personnel and test the plan to ensure accuracy (Cerullo & Cerullo, 2004). Authors should keep in mind during plan development that all steps should be documented, actionable, and, most importantly, kept up to date (Balaouras, 2009).

Identify Threats to the Business

The first aspect a company must consider when creating contingency and continuity plans is the set of potential threats to the business. Some threats differ depending on the type of business. For example, an Internet-based company may be more concerned with cyber threats such as malware and viruses than a small retail store with little to no web presence. The retail store, on the other hand, may be more concerned with protecting databases containing customer credit card information. A defense contractor may see a competitor accessing its intellectual property as the largest threat to the business. There are also threats that affect every business, such as natural disasters, electrical outages, and fires, which must be taken into consideration.

Business Impact Analysis

No business is exempt from harm or disruption; however, threats may not always be easy to quantify or identify. For this reason, a Business Impact Analysis (BIA) can assist in identifying the primary areas affected by a disaster or contingency. A BIA distinguishes the services and functions most critical to the business's bottom line and classifies them according to their effect on the business, level of risk, and likelihood of occurrence. A recommendation is then made on whether to avoid, mitigate, or absorb each risk, along with methods for doing so. Management may also choose to delve further into the identified risks by conducting risk assessments (Cerullo & Cerullo, 2004).

The first step in conducting a BIA is to identify the primary business processes, their supporting systems, and the criticality of recovering those processes and systems. The impacts of a system outage are determined, including projected downtime and the maximum downtime that can be tolerated while still allowing the business to maintain operations. Possible work-around options should also be listed. Management and process owners should work together to create a comprehensive list of processes, process descriptions, and the systems directly related to those processes (Swanson, Bowen, Phillips, Gallup & Lynes, 2010).

The next step in the BIA is to identify the resources required to continue primary processes, along with any interrelated or dependent systems and assets. Considerations for a thorough resource listing include facilities, staff, hardware, software, electronic files, system elements, and critical records (Swanson, Bowen, Phillips, Gallup & Lynes, 2010). Some companies may have a configuration manager or other information systems manager who maintains this information. The constant changes and updates in technology make it important to update this list on a regular basis. An example table listing such assets follows:

Table 1: Company ABC Critical Resources

System          | Platform/Version    | Primary User                     | Critical Process              | Dependencies
Exchange Server | Windows Server 2008 | All users, internal and external | Ensures mail is sent/received | Domain Controllers, Active Directory Servers


The final step in the BIA is to set priorities for recovery of the various systems linked to the critical processes identified in step one. Systems should be recovered in order of their criticality to the business and the alternate options available (Swanson, Bowen, Phillips, Gallup & Lynes, 2010). For example, if the previously mentioned small retail business loses its point of sale (POS) system, cashiers may be able to add up the cost of items and collect cash from customers for a short period, but there is a maximum amount of time before the business starts to lose customers. Therefore, the POS may be the most critical asset on the recovery list. Second to the POS may be the inventory system. Many retailers depend on an automated inventory system to track incoming deliveries and sales, order new supplies, and pay suppliers for items received. These systems are immensely complex, and keeping track of inventory on paper and later having to update the recovered system could be costly in man-hours and mistakes. Third for the retailer may be the store security system. Although employees could be posted at the door to check receipts against purchases, theft may increase, and the store could lose valuable evidence related to a crime or incident.
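A prioritized recovery list of this kind can be captured in a very small data structure, as in the sketch below. The systems, criticality rankings, maximum tolerable downtimes, and workarounds are hypothetical values for the retail example above, not BIA results from any real organization.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    criticality: int          # 1 = most critical to the business
    max_downtime_hours: int   # maximum tolerable downtime from the BIA
    workaround: str

# Hypothetical BIA results for the small retailer discussed above.
assets = [
    Asset("Store security system", 3, 72, "post employees at the door"),
    Asset("Point of sale (POS)", 1, 4, "manual cash sales"),
    Asset("Inventory system", 2, 24, "track inventory on paper"),
]

# Recover in order of criticality, breaking ties by the shortest tolerable downtime.
for asset in sorted(assets, key=lambda a: (a.criticality, a.max_downtime_hours)):
    print(f"{asset.criticality}. {asset.name}: restore within {asset.max_downtime_hours}h "
          f"(interim workaround: {asset.workaround})")
```

Keeping the list in a machine-readable form also makes it easier to keep the plan up to date as systems are added or retired.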

Create a plan to alleviate or lessen the impact of the threats

Now that the BIA is complete, the business can work on a plan to mitigate the identified risks. According to Swanson, Bowen, Phillips, Gallup and Lynes (2010), a contingency plan has three phases, supported by documentation such as the BIA, personnel contact information, and written procedures. The three phases are Activation and Notification, Recovery, and Reconstitution.

Activation and Notification Phase

When a contingency or event occurs that affects a crucial business process, the first step is to put the plan into action and notify the personnel responsible for and affected by the response. This means the plan must identify primary and alternate team members' roles and responsibilities. Procedures should include instructions for notifying staff and customers, including contact information and primary duties of personnel internal and external to the organization, locations of alternate work sites, and checklists to follow for carrying out alternate processes while primary means are restored (Cerullo & Cerullo, 2004). Procedures should be easy to follow and not overly complicated.

Recovery Phase

After personnel are deployed and active in alternate processes to keep the business afloat, it is time to start recovering the assets affected by the contingency, in the order of priority identified during the BIA. The recovery phase takes up the greatest portion of the contingency plan, as there are many options to consider and the costs are high. At a minimum, system backups should be created and stored at an off-site location or in a cloud environment on a regular basis to minimize system recovery time and allow for reconstitution from another location. Procedures for system backup and recovery should be included in the business continuity plan (UMUC, 2011). The entire environment should be included in the backup, including software, executables, databases, training information, and all systems needed to run the operation, because the ability to get back to business depends on the quality of the backups (Barry, 2012).
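A minimal sketch of one routine backup step is shown below: it archives a directory tree and records a SHA-256 digest so the copy sent off-site can be verified before it is relied upon for recovery. The paths and file naming are assumptions for illustration only.

```python
import hashlib
import tarfile
import time
from pathlib import Path

def create_backup(source_dir: str, backup_dir: str):
    """Archive source_dir into a timestamped .tar.gz and return (archive path, sha256)."""
    backup_path = Path(backup_dir) / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(backup_path, "w:gz") as archive:
        archive.add(source_dir, arcname=Path(source_dir).name)
    digest = hashlib.sha256(backup_path.read_bytes()).hexdigest()
    # The digest would be stored with the off-site copy record so the archive's
    # integrity can be checked before a recovery attempt.
    return backup_path, digest

if __name__ == "__main__":
    path, sha = create_backup("/var/company_data", "/mnt/offsite_backups")
    print(f"Wrote {path} (sha256={sha})")
```

In practice the schedule, retention policy, and off-site transfer mechanism would all be spelled out in the continuity plan alongside the restore procedure, since an unverified or untested backup gives little assurance.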

According to a 2002 report by the Disaster Recovery Institute International, the cost of downtime ranged from three to seven percent of the information systems budget. Examples of website downtime costs cited by Cerullo and Cerullo (2004) include $8,000 an hour for leading Internet players, $1,400 per minute on average, and an average of $78,000 per hour for a medium-sized business, with an annual downtime cost of over $1 million. Although these costs were estimated for businesses that depend heavily on the Internet, it is pertinent for any business to consider the cost of downtime when weighing options for timely recovery of assets.

Recovery Options

There are three options for recovery sites: hot, cold, or warm. Businesses should consult with service providers and software vendors when deciding which type of site to use, or whether to outsource this service. A hot site allows for immediate recovery, as it should contain all of the hardware necessary for operations and can be loaded with current operational and backup data (Barry, 2012). The hot site can also serve as the location for storing off-site backups. The greatest consideration for a hot site is the considerable cost of creating and maintaining it. A business should consider a hot site when the cost of the loss of systems is greater than the cost of the site (i.e., there is an ROI) and other site options such as cold or warm do not meet the need.
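A back-of-the-envelope version of that comparison might look like the following. The hourly downtime rate reuses the medium-sized business figure cited above from Cerullo and Cerullo (2004), while the outage exposure and hot site cost are invented assumptions for the example.

```python
# Hypothetical decision inputs for a medium-sized business.
downtime_cost_per_hour = 78_000      # average hourly cost cited by Cerullo and Cerullo (2004)
expected_outage_hours_per_year = 20  # assumed annual outage exposure without a hot site
hot_site_annual_cost = 900_000       # assumed cost to build and maintain the hot site

expected_annual_loss = downtime_cost_per_hour * expected_outage_hours_per_year
print(f"Expected annual downtime loss: ${expected_annual_loss:,}")  # $1,560,000
print(f"Hot site annual cost:          ${hot_site_annual_cost:,}")

if expected_annual_loss > hot_site_annual_cost:
    print("A hot site is justified on cost alone.")
else:
    print("Consider a warm or cold site instead.")
```

The same arithmetic with a smaller outage exposure or a cheaper warm site quickly changes the answer, which is why the BIA's downtime estimates need to be realistic before the site decision is made.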

A cold site provides only a facility to operate from, without the hardware infrastructure of a hot site. While the cost of a cold site is lower, hardware will need to be acquired, along with the backups, before returning to regular operations. Even with robust planning and well-trained personnel, recovery at a cold site could take weeks or longer.

A warm site is the happy medium between hot and cold. Warm sites contain some hardware and can hold backup and recovery data, depending on the setup. Unlike a hot site, a warm site does not have the latest configurations loaded, but it requires less recovery work than a cold site. Outsourcing is also an option, as there are multiple companies that offer a wide range of recovery services (Barry, 2012). Any time outsourcing is considered, Service Level Agreements (SLAs) should be established, adhered to, and updated on a regular basis to cover the changing requirements of the business and the responsibilities of the service provider. As previously mentioned, the quantity and type of systems needing backups, the location of the backups, and the steps to recovery for each contingency should be thoroughly documented in the contingency plan.

Reconstitution Phase

During the reconstitution phase, the system should be validated to confirm the necessary capability and functionality so the business can return to normal operations. If the original facility is beyond repair, the reconstitution activities can also help test and prepare a new location for future use. At this point the plan can be deactivated, and lessons learned and updates to the plan can be documented.

Contingency Testing

The final step in contingency planning is to train personnel to carry out the plan and to test the plan for accuracy. Perhaps the toughest part of contingency planning is not only creating an actionable plan but finding time during normal operations to test it. This is where the buy-in of management is critical: if management does not push the importance of testing, employees will not feel they are stakeholders in the plan or that it is worth their time to test or train for. There are several options for training personnel and testing the plan, from hosting plan reviews or tabletop exercises all the way to complete backup and recovery testing cycles, or a combination thereof. Individual checklists included in the contingency plan can also be given to key personnel or individual work centers to run through during duty hours and check for accuracy and updates. It can be difficult for system administrators to test system checklists, as live systems are critical to operations and cannot be taken down for such purposes. This is where virtual machines can be helpful: copies of servers can be created from virtual templates using very few system resources, allowing for testing and training on systems without affecting current operations.

Costs for training personnel and testing the plan should be estimated and included in the contingency planning and continuity of operations budget. Potential costs include training and testing man-hours not billable to direct operating costs, purchases of additional technology (such as virtual machines and servers) used for testing, and other resources necessary for testing and training, such as office supplies, use of external facilities, or outsourced vendor training.

24-Month Cycle Business Continuity Testing Plan

Below is a sample testing plan based on a 24-month cycle.

Months 1-2: Plan Accuracy 

Plan appendixes are distributed to key personnel in work centers where they will run through their checklists and action items and check for accuracy.  Key personnel will train alternates on procedures.  Alternates will run checklists to ensure they are repeatable.

Months 3-4: Notification Procedures

Management will choose a tabletop scenario based on the probability of the various threats identified in the BIA. Work centers will practice notification procedures by running through call lists based on the scenario. On-duty and off-duty emergency contact information will be tested and updated as necessary.

Months 5-6: Activation Procedures

Management will choose another scenario based on the BIA and make note of the systems affected. Key personnel will be notified to test their activation procedures based on that scenario. Operations personnel will conduct business processes using alternate procedures, systems administrators will recover backups to alternate hardware (or virtual machines), and operations personnel will attempt the same processes on the recovered systems. This exercise will identify gaps in the checklists, data that was not backed up or recoverable, and system configurations needed after recovery. Checklists and procedures will be updated based on the results.

Months 7-8: Reconstitution Testing

Reconstitution is the process of ensuring that a system is fully operational and configured for use. In order to validate a system, users must identify the data needed on the system and the procedures for working with that data. This is not covered in the BIA but should be covered in a continuity book for the duty position. Continuity books are created to ensure that someone with limited knowledge of a position can perform basic tasks when key personnel are not available. During this testing phase, personnel will be given an alternate duty position for a specified period of time and will attempt to perform routine tasks using the continuity book as their guide. Often in an emergency, the person who knows an essential business process best is not available, and it is paramount that other personnel be able to fill in where necessary.

Months 9-10: Updating Continuity Procedures

Based on the last test of continuity books, personnel will utilize months 9-10 to update their continuity documentation and prepare for a disaster preparedness drill in months 11-12.

Months 11-12: Contingency Recovery Drill

In this test, all phases will be exercised. Management will choose a scenario from the BIA that requires a move to an alternate facility (a hot site), and ultimately employees will reconstitute operations at the new site. First, notification procedures will be tested; employees will be informed ahead of time that this is a test of the system. External agencies and customers will also be notified ahead of time that the agency is running the test so as not to affect operations. Employees will work their checklists using alternate procedures for regular operations, depending on the scenario, until information technology (IT) personnel notify them to move to the hot site. Employees will then move to the hot site, continue operations, identify shortfalls, and update the plan based on the lessons learned during the test. This type of drill is not recommended for businesses without a hot site, as it would put too much risk on operations; however, a similar tabletop contingency drill to test employees' awareness of what to do in various scenarios would still be helpful.

Months 13-24: Repeat months 1-12

During the second year, the business will repeat the testing done in the first year and adjust timelines and procedures as necessary to fine-tune the process.  Different scenarios can be given, or the same scenarios if management feels employees need more practice.  Repetition allows employees to gain confidence in plan execution and creates a mindset of contingency planning as part of day-to-day operations.


Technology plays a vital role in business and threats to technology are constantly evolving.  Businesses must be ready to react to a multitude of situations from a computer virus to a hurricane.  The only way to react successfully is to have a well-written, well-tested contingency and continuity plan.  The steps to planning include identifying threats through BIA, planning for mitigation of risks or reduction of impact to the business through contingency plan development, and setting up recovery options such as backup sites.  Finally, the plan must remain actionable and up-to-date, and the best way to ensure this is through training personnel and testing the plan on a regular basis.





Baker, N. (2012). Enterprisewide Business Continuity. (Cover story). Internal Auditor, 69(3), 36-40.

Barry, C. (2012). Backup plans. Multichannel Merchant, 8(5), 36-38.

Balaouras, S. (2009). Businesses take BC planning more seriously. For Security & Risk Professionals.

Cerullo, V., & Cerullo, M. J. (2004). Business continuity planning: a comprehensive approach. Information Systems Management, 21(3), 70-78.

Eilperin, J., & Fears, D. (2013, April 18). Fertilizer facility explosion injures at least 160 in central Texas; 5 to 15 feared dead. The Washington Post. Retrieved from http://www.washingtonpost.com/world/national-security/fertilizer-plant-explosion-leaves-more-than-100-wounded-in-central-texas/2013/04/18/14fa7cb2-a7ef-11e2-a8e2-5b98cb59187f_story_2.html

Geer, D. (2012). Are You Really Ready for Disaster? Three exercises for testing your business continuity plans. CSO Magazine, 11(8), 16-18.

Karim, A. (2011). Business Disaster Preparedness: An Empirical Study for measuring the Factors of Business Continuity to face Business Disaster. International Journal of Business & Social Science, 2(18), 183-192.

Kirvan, P. (2009, July). Using a business impact analysis (BIA) template: A free BIA template and guide. TechTarget: SearchDisasterRecovery. Retrieved November 4, 2011, from http://searchdisasterrecovery.techtarget.com/feature/Using-a-business-impact-analysis-BIA-template-A-free-BIA-template-and-guide.

Lam, W. (2002). Ensuring business continuity. IT Professional, 4(3), 19-25.

On Windows. (2006, March 23). Half of us businesses lack continuity plan. On Windows Magazine, Retrieved from http://www.onwindows.com/Articles/Half-of-US-businesses-lack-continuity-plan/2063/Default.aspx

Rawlings, P. (2013). SEC’s Aguilar Pushes Continuity Plan Testing. Compliance Reporter, 25.

Rucks, A., Ginter, P., Duncan, W., & Lesinger, C. (2011). A Continuity of Operations Planning Template: Translating Public Policy into an Effective Plan. Journal of Homeland Security and Emergency Management, 8(1).

Slater, D. (2012, December 13). Business continuity and disaster recovery planning: The basics. Retrieved from http://www.csoonline.com/article/204450/business-continuity-and-disaster-recovery-planning-the-basics?page=1

Swanson, M., Bowen, P., Phillips, A., Gallup, D., & Lynes, D. (2010). Contingency planning guide for federal information systems (NIST Special Publication 800-34, Rev. 1). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-34-rev1/sp800-34-rev1_errata-Nov11-2010.pdf

Totty, P. (2009). Business Continuity: Test and Verify. Credit Union Magazine, 75(12), 46.

UMUC. (2011). Module 11: Service Restoration and Business Continuity.  Retrieved from http://tychousa.umuc.edu/

Whitworth, P. M. (2006). Continuity of Operations Plans: Maintaining Essential Agency Functions When Disaster Strikes. Journal of Park & Recreation Administration, 24(4), 40-63.

Wold, G. H. (2006). Disaster recovery planning process. Disaster Recovery Journal, 5(1).



Digital Forensics Investigations: Data Sources and Events based Analysis


Amy Wees

CSEC650, 9045

March 15, 2013


Abstract: Data sources used to gain evidence in digital forensics investigations differ significantly depending on the case. This paper prioritizes the data sources used to gather evidence for network intrusions, malware installations, and insider file deletions. These three events drive the prioritization of the types of data that are analyzed, the information desired, and the usefulness of that data with regard to the event. The primary focus is on information garnered from sources such as user account audits, live system data, intrusion detection systems, Internet Service Provider records, virtual machines, hard drives, and network drives.

Digital Forensics Investigations: Data Sources and Events based Analysis


Digital forensics investigations deal with a multitude of data sources used to preserve and capture evidence for use in legal proceedings. The various events or crime scenes investigators encounter drive the prioritization of the types of data that are analyzed, the information desired, and the usefulness of that data with regard to the event. The goal of this paper is to discuss three specific events, network intrusion, malware installation, and insider file deletion, and then to analyze and prioritize the data sources that can be useful in investigating each case.

Network Intrusion

A network intrusion occurs when a computer network is accessed by an unauthorized party. Network intrusions can have significant impacts on the victim organization, as files can be stolen, altered, or deleted, and hardware or software can be damaged or destroyed. A case study published by Casey (2005) in the Digital Investigation journal describes a network intrusion investigation. The scenario presented in that case study will be the basis for analyzing the data sources most crucial to the contribution of evidence.

In March 2000, at a medical research facility, a system administrator completing routine maintenance tasks noticed an unfamiliar account named "omnipotent" on a server for which he was solely responsible. The administrator immediately deleted the account and notified information security personnel (Casey, 2005). The incident caused several laboratories to be shut down for days, halting ongoing medical research and resulting in severe financial losses for the company. Fortunately, after a thorough investigation the perpetrator was caught and charged in 2004 (Casey, 2005).

Prioritized data sources

Account Auditing

Forensic investigators in this scenario took many steps to preserve evidence, reconstruct the crime, track the intruder, and examine the data. The aim of this paper is to prioritize the data sources used in the scenario from most to least useful in terms of a network intrusion. The first data source used was the review of user accounts during routine maintenance. Without routine review of user accounts and permissions, the account the intruder was using to access the server might never have been discovered (Casey, 2005). This scenario exemplifies why account and role auditing is so vital.

The Federal Agency information technology (IT) handbook on technical controls published by the National Institute of Standards and Technology (NIST) (2002) recommends that access to any asset be controlled by combining technical and administrative controls so that only approved users are given an applicable level of access and can be held accountable for their use of information systems. Access should be monitored by positively identifying and authenticating users. The handbook also makes clear that a weakness in policy at one node in the network can put other nodes at risk; therefore, it is essential that all agencies have a uniform access control policy (NIST, 2002). Account and role maintenance should require users to authenticate with a strong password and to change passwords on a consistent basis. Administrators must also ensure that user names belong to currently identified users and that accounts are deleted and updated often. Auditing should be established on user accounts so that accounts that have not been active for a period of time are deleted. Incorrect logon attempts should also be limited, and accounts locked after multiple erroneous password entries (NIST, 2002).

If some of these policies had been in place in the above scenario, the "omnipotent" user might have been discovered earlier or prevented from accessing the system in the first place. Role auditing can be challenging when dealing with multiple operating systems and role-based user accounts, as each operating system has different account systems and auditing procedures. It may also be difficult for organizations with a large pool of users to weed out inactive or incorrect accounts. Organizations may also outsource their IT services, which makes user role auditing difficult, as the administrator on the other end of the phone may not be able to positively identify a user or assign the correct permissions to an account. User account and role auditing is the most useful of the four data sources in this network intrusion scenario because it is easy to accomplish and, if maintained properly, can help an administrator identify whether an account has been misused, locked out, or does not belong.
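A minimal sketch of that kind of routine account audit is shown below. The authorized-user roster, inactivity threshold, and account records are assumptions made for illustration, not any agency's actual policy values.

```python
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)            # assumed policy threshold
AUTHORIZED_USERS = {"asmith", "bjones", "cdoe"}  # hypothetical roster of approved users

# Hypothetical account records pulled from a server.
accounts = [
    {"name": "asmith", "last_login": datetime(2013, 5, 30)},
    {"name": "omnipotent", "last_login": datetime(2013, 6, 1)},   # not on the roster
    {"name": "bjones", "last_login": datetime(2012, 11, 2)},      # long inactive
]

def audit(accounts, now):
    """Flag accounts that are unknown or have gone stale."""
    for acct in accounts:
        if acct["name"] not in AUTHORIZED_USERS:
            print(f"FLAG: unknown account '{acct['name']}'")
        elif now - acct["last_login"] > INACTIVITY_LIMIT:
            print(f"FLAG: '{acct['name']}' inactive since {acct['last_login']:%Y-%m-%d}")

audit(accounts, now=datetime(2013, 6, 9))
```

Run regularly, a check like this would have surfaced the unfamiliar "omnipotent" account well before a routine maintenance pass happened to notice it.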

Live System Data

Next, to gain more evidence, investigators used the EnCase program to capture live data from the systems and kept an audit log using the script command. During this capture, they found that the intruder was accessing the network through a dial-up connection in Texas, had installed a sniffer, and had replaced the original Telnet with a version containing a backdoor that allowed remote access. The sniffer log contained records of the backdoor intrusions as well as root passwords for multiple computers. They also discovered the hacker had created his own telnet password, "open_sesame," to access and compromise additional computers on the network (Casey, 2005).

The primary reason an incident handler would use a tool such as EnCase to capture live data is to determine whether an event has occurred and whether a full investigation of a system is needed. In the above scenario, the incident handlers used EnCase to capture live, volatile data and were able to determine whether a network connection had been made from the intruder to a given computer. They were also able to view live system logs to see what passwords had been used to access the system. Capturing data from live systems is also called "live forensics." Live forensics captures system information and volatile data that disappear after the device is powered down. The challenges of live forensics lie in preserving the state of the system and ensuring the data captured is forensically sound (McDougal, 2006). The best way to do this is with a forensic toolkit such as EnCase, which keeps the process as automated as possible. In the above scenario, a new employee who was not competent in live forensic processes missed files on several machines and did not keep the audit log that would have allowed incident handlers to determine which data came from which computer, leaving all of the evidence he captured inadmissible in the case. Live system data is the second most useful data source because it provides the most promising evidence of which files and systems have been compromised and shows, in real time, how the intruder is accessing the system (Casey, 2005).
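As a rough illustration of the kind of volatile data such a capture targets, the sketch below snapshots running processes and active network connections along with a capture timestamp. It uses the third-party psutil library and is only a sketch of the concept, not a substitute for a validated forensic toolkit such as EnCase; the output file name is an assumption.

```python
import json
import time

import psutil  # third-party library, assumed available on the examiner's workstation

def snapshot_volatile_state(output_path="volatile_snapshot.json"):
    """Record running processes and network connections before they disappear."""
    state = {
        "captured_at": time.time(),
        "processes": [p.info for p in psutil.process_iter(["pid", "name", "username"])],
        "connections": [
            {
                "laddr": f"{c.laddr.ip}:{c.laddr.port}" if c.laddr else None,
                "raddr": f"{c.raddr.ip}:{c.raddr.port}" if c.raddr else None,
                "status": c.status,
                "pid": c.pid,
            }
            for c in psutil.net_connections(kind="inet")
        ],
    }
    with open(output_path, "w") as out:
        json.dump(state, out, indent=2)
    return output_path

if __name__ == "__main__":
    print(f"Volatile state written to {snapshot_volatile_state()}")
```

A real live-forensics workflow would also hash the output, record who ran the capture and when, and avoid writing to the suspect machine's own disk, for the admissibility reasons discussed above.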

Intrusion Detection System

The third data source used was the intrusion detection system. After the live system analysis revealed the subnet used by the intruder in Texas, investigators were able to reconfigure the intrusion detection system to monitor network traffic for those connections. After the reconfiguration, investigators monitored network traffic and watched as the hacker used the telnet backdoor password to access more machines that had not previously been identified as targets. As a result of these findings, critical systems could be secured and processed for evidence (Casey, 2005).

Intrusion Detection Systems (IDS) are indispensable in detecting network intrusions because they can be programmed to automatically alert administrators when abnormal network traffic occurs. Hill and O'Boyle (2000) compare an IDS to a burglar alarm: although one observes cyberspace and the other physical space, both provide alerts when the unforeseen occurs. The main difference is that unauthorized actions in cyberspace are harder to detect than those in a physical space. The challenge for an IDS operator is sorting out harmless activity from the anomalous and programming the IDS to capture future anomalies (Hill & O'Boyle, 2000).

Automated IDS and accompanying forensic procedures utilize "signature matching," which searches network connections and activity and alerts on specific incident patterns and means of attack. Unfortunately, automatic signature matching is not a definitive process and depends on many factors. Signatures often create false alarms because they are too generalized, such as alarms for port scanning. Attack profiles can also vary considerably, from well-known malware insertion attempts to customized programs created to target specific systems and not yet known to the public. The latter customized attacks are not caught by an IDS because signatures do not yet exist for them and are not made available until after the attack. The downside of utilizing an IDS in this situation is that, without the latest updates, it is not nearly as effective (Hill & O'Boyle, 2000).
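A toy version of signature matching is sketched below to make the idea concrete. The signatures and log lines are invented for the example; real IDS rule languages (Snort rules, for instance) are far richer than simple pattern searches.

```python
import re

# Hypothetical signatures: a name paired with a pattern to look for in traffic logs.
SIGNATURES = {
    "telnet-backdoor-password": re.compile(r"open_sesame"),
    "port-scan": re.compile(r"SYN to \d+ ports"),
}

def match_signatures(log_lines):
    """Yield (signature name, line) for every log line that matches a known pattern."""
    for line in log_lines:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                yield name, line

sample_log = [
    "2013-06-09 02:14 telnet login attempt password=open_sesame src=203.0.113.5",
    "2013-06-09 02:15 normal http request src=198.51.100.20",
]

for name, line in match_signatures(sample_log):
    print(f"ALERT [{name}]: {line}")
```

The example also shows the weakness described above: a pattern only exists for attacks someone has already seen and written a signature for, so a customized or zero-day attack sails past the matcher.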

Forensic investigators can start by sorting through automatically generated IDS alarms and then use clues from the alarms to further analyze less developed system logs and information. In order to isolate evidence of an intrusion, the investigator needs a broad knowledge of operating systems and hacking techniques and must be able to interpret the logs of diagnostic tools and systems. The IDS is the third most useful source of evidence because, after the intrusion is discovered, the IDS can capture details about the rogue connection, help prevent or block further connections, and pinpoint where the traffic is aimed.

Internet Service Provider Records

The last source of evidence was the Internet Service Provider (ISP) used by the hacker. Investigators were able to call the ISP and request that logs and records connected with the case be preserved (Casey, 2005). According to Daniel (2012), after a subpoena, if one is required, some basic information can be gleaned from ISP records, depending on what the ISP collects about its account holders. The available information could include names, e-mail addresses, and mailing addresses for paid account holders, as well as payment information such as credit card or bank account details, which may lead to other evidence. IP addresses assigned to the account during the dates and times requested, the associated activity, and the MAC address of the computer making the connection may also be available (Daniel, 2012). The challenges of collecting information from an ISP are that a subpoena may be required, the information may not always be reliable, and different ISPs retain varying amounts of information about their customers. For these reasons, ISP records are the least useful data source in this scenario.

Malware Installation

Malware is malicious software that can come from scripts or code hidden in websites or content, embedded in web advertisements, or buried in different types of software programs. Malware can infect a system when a user visits a website, opens an email, or clicks on a hyperlink, among many other normal activities. The types of malware include viruses, rootkits, spyware, and worms, and each type infects a system in a different way (Goodrich, 2012). Malware is dangerous because it exists in a multitude of formats, is easy to create, and is hard to track. Although there are many anti-virus, anti-spyware, and anti-malware applications for detecting and removing malware from a system, these programs are only as effective as the updates provided to recognize the attack.

One common scenario was presented by Martin Overton at the 2008 Virus Bulletin Conference: a user calls the helpdesk complaining that their computer is suddenly and unusually slow to respond and that they cannot bring up Task Manager to figure out what the problem might be. How does the helpdesk know whether the problems are caused by malware or by something the user has done? The anti-virus program shows no signs of an infection, has been updated recently, and has been active throughout the reported timeframe. What should the administrator do? How can the machine be investigated further to determine the presence of malware (Overton, 2008)? Overton presents an all-too-familiar scenario, which will be used as the basis for analyzing a malware installation.

Prioritized data sources

Live System Data

Similar to the network intrusion scenario, and most useful in the malware installation scenario, is the collection of live system data. Overton (2008) recommends that after a suspect system is identified, all traffic coming to and leaving the system be captured, including a search for hidden files inserted by malware, most likely located in alternate data streams. Nmap, Nessus, and various other vulnerability assessment tools can be used on the suspect workstation as well as the network to analyze anomalies (Overton, 2008). Programs such as Helix3 and Windows Forensic Toolchest can examine volatile system data for valuable clues such as network routing tables, system drivers and applications, and running processes and services, all without alerting the attacker that an investigation is taking place (Aquilina, Malin & Casey, 2010). The challenge in determining whether malware is installed on a live system is that tools may not be available to conduct a thorough analysis, anti-malware tools may give a high number of false positives, or the malware may be so stealthy that it goes unnoticed until the damage caused is irreparable.
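One simple technique that supports this kind of triage is comparing file hashes on the suspect system against a known-good baseline so that new or modified files stand out. The sketch below illustrates the idea only; the directory and baseline file are hypothetical, and it is not a replacement for the dedicated tools named above.

```python
import hashlib
import json
from pathlib import Path

def hash_tree(root: str) -> dict:
    """Map each file under root to its SHA-256 digest."""
    digests = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def compare_to_baseline(root: str, baseline_path: str):
    """Report files that are new or changed relative to a saved clean baseline."""
    baseline = json.loads(Path(baseline_path).read_text())
    for path, digest in hash_tree(root).items():
        if path not in baseline:
            print(f"NEW file (not in baseline): {path}")
        elif baseline[path] != digest:
            print(f"MODIFIED file: {path}")

if __name__ == "__main__":
    # Hypothetical baseline captured while the system was known to be clean.
    compare_to_baseline("C:/Windows/System32", "clean_baseline.json")
```

Like signature matching, this only works if a trustworthy baseline was captured before the infection, which is why such baselines are normally collected as part of routine system hardening rather than after an incident.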

Intrusion Detection System

An IDS is the second most useful data source for investigating a malware installation.  After the initial investigation is complete and the analyst has deemed an infection probable even though it was not caught by the anti-malware program, the workstation should be removed from the network to prevent the spread of malware to other systems, and the ports and protocols collected should be analyzed further using an IDS such as Snort or a packet analyzer such as Wireshark (Overton, 2008).  The second step in discovering malware is analysis, and an IDS can assist by allowing signatures to be created from the information captured during the previous inspection.  These signatures can then be deployed to block future attacks until anti-virus programs are updated.  There are several reasons an IDS can be used to both detect and prevent malware that comes in through the network boundary.  In an earlier conference paper, Overton (2005) explains that malware is evolving quickly and requires faster detection methods.  An IDS can also be part of a defense-in-depth strategy that combines IDS with anti-malware scanning tools to provide improved protection.  Finally, an IDS records the source IP address of suspicious traffic, and this data can be used to quickly stop the spread of threats across the network (Overton, 2005).  The challenges with an IDS are that signatures can be difficult to create and maintain, analysts require training to use them for malware detection, and the amount of information left for an investigator to comb through may be overwhelming.
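
The core idea behind an IDS signature is simple pattern matching over captured traffic.  The sketch below illustrates that idea in miniature by scanning a saved capture file for known byte patterns; the signature names and byte strings are hypothetical placeholders, not real malware indicators or actual IDS rules.

```python
# Simplified illustration of signature-based detection: scan captured payload
# bytes for known byte patterns, the same idea an IDS rule encodes.  The
# signatures below are hypothetical placeholders, not real malware indicators.
SIGNATURES = {
    "suspicious-beacon": b"GET /cmd.php?id=",   # hypothetical HTTP beacon pattern
    "fake-dropper-magic": b"\xde\xad\xbe\xef",  # hypothetical binary marker
}


def scan_capture(path):
    """Return a list of (signature_name, offset) hits found in a capture file."""
    with open(path, "rb") as fh:
        data = fh.read()
    hits = []
    for name, pattern in SIGNATURES.items():
        offset = data.find(pattern)
        while offset != -1:
            hits.append((name, offset))
            offset = data.find(pattern, offset + 1)
    return hits


if __name__ == "__main__":
    for sig, off in scan_capture("suspect_traffic.bin"):   # hypothetical capture file
        print(f"ALERT: {sig} matched at byte offset {off}")
```

A production IDS adds protocol decoding, stateful inspection, and rule management on top of this basic matching step.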

Virtual Machine

The third most useful data source for malware installation and analysis is a virtual machine.  As suggested by Overton (2008), a private or closed network or lab environment should be used to analyze malware if one is available.  Virtual machines make this possible by allowing multiple systems to run on the same hardware so the observer can watch how a malware sample behaves inside various systems, an approach also called “behavioral malware analysis” (Zeltser, 2007).  Virtual machines can take on the forms of many different systems or platforms without requiring an entire lab of expensive equipment.  Virtual machine vendors such as VMware allow the administrator to take multiple snapshots of a system’s settings, performance, and volatile data throughout the observation process, so that if further study is needed it is possible to return to a previous snapshot.  VMware can also create a simulated network, so it is not necessary to connect the infected machine to a live network, allowing analysis in a protected environment while retaining the ability to inspect network traffic (Zeltser, 2007).  In a virtual environment, threats can be detected and mitigations tested and proven.
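
The snapshot-and-revert workflow can be scripted.  The following rough sketch wraps VMware's command-line utility vmrun to snapshot an analysis VM before detonation and revert afterward; the .vmx path and snapshot name are hypothetical, and depending on the product a host-type flag (e.g., -T ws) may also be required.

```python
# Rough sketch of automating snapshot/revert around a malware detonation,
# assuming VMware's command-line utility "vmrun" is on the PATH.  The .vmx
# path and snapshot name are hypothetical placeholders.
import subprocess

VMX = "/vms/analysis-win10/analysis-win10.vmx"   # hypothetical analysis VM
BASELINE = "clean-baseline"                      # hypothetical snapshot name


def run_vmrun(*args):
    """Invoke vmrun and raise if the command fails."""
    subprocess.run(["vmrun", *args], check=True)


def detonate_sample():
    run_vmrun("snapshot", VMX, BASELINE)          # preserve the clean state
    # ... copy the sample in, execute it, and collect memory/network artifacts ...
    run_vmrun("revertToSnapshot", VMX, BASELINE)  # roll back after observation


if __name__ == "__main__":
    detonate_sample()
```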

The use of virtual machines (VMs) also presents challenges: a virtual environment cannot always replicate the characteristics of an operating system running on physical hardware, which may allow malware to detect that it is running inside a VM.  In certain cases a virtual environment may not meet the need, because of the type of system being imitated or the behavior of the malware, requiring the analyst to use a complicated and expensive laboratory environment (Brand, Valli & Woodward, 2010).

Insider File Deletion

Insider threats to an organization can come from employees, contractors, vendors, visitors, and anyone else with reasonable access to company assets.  What makes insiders threatening is their familiarity with systems, databases, and processes, as well as their permitted position inside security barriers (Cappelli, Keeney, Kowalski, Moore & Randazzo, 2005).  Whether accidentally or deliberately, files that are crucial to an organization can be deleted, and information security personnel need the skills and tools to recover that data.

In a scenario from March 2002 offered by Cappelli, Keeney, Kowalski, Moore and Randazzo (2005), a resentful employee of a finance company planted a logic bomb that erased 10 billion files before he quit over an annual bonus disagreement.  A logic bomb can be inserted into a computer system and set to activate at a later time or upon a specified action.  The deleted files in this case impacted servers across the country and cost over $3 million in damages and file reconstruction.  If the company attempted to recover the deleted files, what data sources would be useful to the investigation?  This question is the basis for the following analysis.

Prioritized data sources

Hard Drive (Non-volatile system data)

In the previous subjects of network intrusion and malware installation, live system data was the highest priority data source because of the indications given by volatile data.  In the case of insider file deletion, however, the first goal is to make a forensic copy of the hard drive in an attempt to recover data that has not been overwritten.  Even the least savvy computer user knows to empty the recycle bin, so volatile data is not as much of a concern as the non-volatile data referenced in the master file table, which can often be recovered with the assistance of various third-party applications.

For example, when a file is removed from the recycle bin in Windows, only the file system’s record of the file, such as its path, sector location, and creation and modification dates, is marked as deleted.  The file system simply notifies Windows that the space where the deleted file used to reside is available for use, and newly saved files will eventually overwrite the old information.  However, if a newly saved file is not as large as the previously deleted file and does not occupy all of its space (so the old information is not completely overwritten), the file is still recoverable with forensic software.  If only a short time has passed since the deletion, tools such as WinUndelete for Windows can easily recover the file (Landry & Nabity, n.d.).  The challenge of recovery from a computer hard drive is that, after a period of time, the desired files may be entirely overwritten.  A smart criminal may also use freeware such as “Eraser” to overwrite erased data immediately, making it unrecoverable by forensic toolkits (Capshaw, 2011).
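
The point that “deleted” data remains on disk until it is overwritten can be shown with a toy file-carving sketch.  The example below scans a raw disk image for JPEG header and footer markers and extracts the bytes in between; the image file name is hypothetical, and real recovery tools also handle fragmentation, file-system metadata, and many more formats.

```python
# Toy file-carving sketch: scan a raw disk image for JPEG header/footer
# markers and extract the bytes in between.  Real tools handle fragmentation,
# file-system metadata, and many more formats; this only shows the idea that
# "deleted" data often remains on disk until it is overwritten.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"


def carve_jpegs(image_path, out_prefix="carved"):
    with open(image_path, "rb") as fh:
        data = fh.read()

    count = 0
    start = data.find(JPEG_HEADER)
    while start != -1:
        end = data.find(JPEG_FOOTER, start)
        if end == -1:
            break
        with open(f"{out_prefix}_{count}.jpg", "wb") as out:
            out.write(data[start:end + len(JPEG_FOOTER)])
        count += 1
        start = data.find(JPEG_HEADER, end)
    return count


if __name__ == "__main__":
    recovered = carve_jpegs("disk_image.dd")   # hypothetical forensic image
    print(f"Recovered {recovered} candidate JPEG files")
```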

Network Storage

Of equal worth to a forensic investigation in the insider file deletion scenario is the recovery of deleted files from a network storage device.  In most cases, files of considerable importance to an organization must be shared with a group of people and are therefore located on a network storage device such as Network Attached Storage (NAS), a Windows file server, or a Storage Area Network (SAN).  After a file is deleted from a folder on the network, the easiest way to recover it is through previous versions.  Microsoft TechNet (2005) explains that on most Windows Server versions there is a Previous Versions tab that, when selected, will list files that have been deleted from that location so they can be copied to a new location.  Similarly, NAS and SAN file systems offer recovery of recent snapshots from their administrative user interfaces.  The challenge with recovering files from a network storage device is that a copy of a RAID or other file system volume may be large and difficult to analyze.  Insiders with administrative access may also know how to permanently delete files or destroy network storage volumes.


Digital forensics investigations deal with a multitude of data sources.  This paper has covered three events that drive the prioritization of the types of data analyzed, the information desired, and the usefulness of that data with regard to the event.  Important data sources for network intrusions are account audits, live system data, Intrusion Detection Systems, and Internet Service Provider records.  Malware installation requires examination of live system data, Intrusion Detection Systems, and virtual machines.  Recovery of deleted files relies mostly on hard drives and non-volatile data.  Each data source has tools, advantages, and challenges for an investigator to consider depending on the situation at hand.




Aquilina, J. M., Malin, C. H., & Casey, E. (2010). Malware forensics field guide for Windows systems (Digital Forensics Field Guides). New York: Syngress. Retrieved from http://www.malwarefieldguide.com/Chapter1.html

Brand, M., Valli, C., & Woodward, A. (2010, November). Malware forensics: Discovery of the intent of deception. Originally published in the proceedings 8th Australian digital forensics conference, Perth, Australia. Retrieved from http://ro.ecu.edu.au/cgi/viewcontent.cgi?article=1074&context=adf

Cappelli, D., Keeney, M., Kowalski, E., Moore, A., & Randazzo, M. (2005). Insider threat study: Illicit cyber activity in the banking and finance sector. (Technical Report, Carnegie Mellon Software Engineering Institute). Retrieved from http://www.dtic.mil/dtic/tr/fulltext/u2/a441249.pdf

Capshaw, J. (2011, April 01). Computer forensics: Why your erased data is at risk. Retrieved from http://www.webmasterview.com/2011/04/computer-forensics-data-risk/

Casey, E. (2005). Case study: Network intrusion investigation – lessons in forensic preparation. Digital Investigation, 2005(2), 254-260. doi: 10.1016. Retrieved from https://wiki.engr.illinois.edu/download/attachments/203948055/1-s2-1.0-S1742287605000940-main.pdf?version=1&modificationDate=1351890428000

Daniel, L. (2012). Digital Forensics for Legal Professionals. Waltham, MA: Elsevier Inc. Retrieved from http://my.safaribooksonline.com/book/-/9781597496438/22-discovery-of-internet-service-provider-records/223_what_to_expect_from_an_int

Goodrich, R. (2012, Nov 21). What is Malware? How malicious software can affect your computer. Retrieved from http://www.technewsdaily.com/15612-what-is-malware.html

Hill, B., & O’Boyle, T. (2000, August). Cyber detectives employ intrusion detection systems and forensics. Retrieved from http://www.mitre.org/news/the_edge/february_01/oboyle.html

Landry, B., & Nabity, P. (n.d.). Recovering deleted and wiped files: A digital forensic comparison of FAT32 and NTFS file systems using evidence eliminator. Retrieved from http://www.academia.edu/1342298/Recovering_Deleted_and_Wiped_Files_A_Digital_Forensic_Comparison_of_FAT32_and_NTFS_File_Systems_using_Evidence_Eliminator

McDougal, M. (2006). Live forensics on a windows system: Using windows forensic toolchest. Retrieved from http://www.foolmoon.net/downloads/Live_Forensics_Using_WFT.pdf

Microsoft. (2005, January 21). Recover a file that was accidentally deleted. Retrieved from http://technet.microsoft.com/en-us/library/cc787329(v=ws.10).aspx

National Institute of Standards and Technology. (2002). Agency IT Security Handbook: Technical controls. In Federal Agency Security Practices (2 Ed.). Retrieved from http://csrc.nist.gov/groups/SMA/fasp/documents/policy_procedure/technical-controls-policy.doc

Overton, M. (2005, May). Anti-malware tools: Intrusion detection systems. Paper presented at the 2005 EICAR Conference, Malta. Retrieved from http://momusings.com/papers/EICAR2005-IDS-Malware-v.1.0.2.pdf

Overton, M. (2008, October). Malware forensics: Detecting the unknown. Paper presented at the 2008 Virus Bulletin Conference, Ottawa, Canada. Retrieved from http://momusings.com/papers/VB2008-Malware-Forensics-1.01.pdf

Zeltser, L. (2007, May 1). Using VMware for malware analysis. Retrieved from http://zeltser.com/vmware-malware-analysis/




Trusted Platform Module

 Trusted Platform Module

Team Project by: Philip Roman, Vouthanack Sovann, Kenneth Triplin, David Um, Michael Violante, Amy Wees

CSEC640, 9046

November 25, 2012



The focus of this paper is to discuss current issues and recent developments in Trusted Platform Module (TPM) security, as well as its strengths and weaknesses.  The main reasoning behind TPM security devices is to establish a means of trusted computing.  These devices use unique hardcoded keys to perform software authentication, encryption, and decryption, among other functions.  This paper will discuss what a TPM is comprised of, its strengths and weaknesses, possible vulnerabilities, TPM attestation, and potential uses for TPM.  The intended audience is readers who are technically savvy, with in-depth knowledge of security concepts and a general understanding of TPM, encryption, and other cybersecurity concepts.


As trusted computing becomes more prevalent and necessary, TPM enhances the security of information systems by acting as a trusted entity that can be used for secure storage and cryptographic key generation, among other capabilities (Aaraj, Raghunathan, & Jha, 2008).  TPM was created by the Trusted Computing Group (TCG), an initiative among some of the most prominent information technology corporations in the world, to establish a better means of trusted computing.  A TPM chip helps ensure the security of an information system because it implements security from a hardware perspective.  Although TPM technologies and the software that uses them inherently contain flaws and vulnerabilities, the primary hesitation many groups have had with implementing TPM is that it is perceived as too inefficient.  Some organizations, especially those that demand extensive end-user monitoring and activity logging, are also hesitant to implement TPM because it can limit their ability to scrutinize activity or capture activity logs.

TPM is composed of three basic elements: the root of trust for measurement, the root of trust for storage, and the root of trust for reporting.  These elements refer to platform integrity measurement, the secure storage of those measurements, and the reporting of the stored values (Aaraj, Raghunathan, & Jha, 2008).  A secondary use of TPM is the generation of cryptographic keys.  Figure 1 details the make-up of TPM:

Figure 1: Make-Up of TPM (Aaraj, Raghunathan, & Jha, 2008)


Benefits and Strengths of TPM

TPM offers three primary benefits: secure storage for content, secure reporting of platform-specific criteria, and hardware authentication (Ryan, 2009).  By using a TPM to secure content, the user can store files securely without relying on a software-based operating system.  For mobile devices, users can encrypt entire hard drives using the TPM, reducing the risk of loss of sensitive information.  Van Dijk, Sarmenta, Rhodes, and Devadas (2007) explain how many users can connect to an untrusted storage device over an untrusted network and protect the information being shared, without relying on a secured common operating system, by using TPM 1.2 technology.  For example, many users today rely on hosted online storage so they can share data between multiple devices and access information from anywhere.  This normally requires the user to trust the administrators, the server and client operating systems and additional security software, and the server BIOS and CPU.  Van Dijk et al. (2007) argue that by using a TPM chip in both the user’s machine and the online server, even peer-to-peer transactions can be completely secured using one-time certificates.  A one-time certificate uses the TPM to verify the identity of the sender and receiver of the information, and the certificate can be verified at any time after the transaction has occurred.  The verifier requires no contact with the issuer and need only rely on the TPM chip in the originating machine.  Significantly, one-time certificates cannot be counterfeited or falsified, even from a hacked machine, which could open their use to multiple offline applications (Van Dijk et al., 2007).

A TPM can also collect, secure, and report information about the state of a computer’s components, such as the BIOS, boot records and sectors, applications, and the OS.  The TPM does this using platform configuration registers (PCRs), which securely hold information measured by one component about the state of another component.  Upon boot-up, component X measures the status of component Y and inserts the result into a PCR, where it is secured and can attest to the status of the platform from that point forward (Ryan, 2009).  To pass this information, known as the platform configuration, to another entity, the TPM encrypts the configuration using a secured signature key which can only be decrypted by a remote TPM key with the required authentication information (Sadeghi & Stüble, 2004).
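
The way a PCR accumulates boot measurements can be simulated in a few lines.  The sketch below is an illustration of the extend operation (each new measurement is hashed together with the register's current value), not the chip's actual interface; the component names and byte strings are placeholders.

```python
# Simplified simulation of how a PCR accumulates boot measurements: each new
# measurement is folded into the register as PCR = SHA-1(PCR || measurement),
# so the final value depends on every component measured and the order in
# which it was measured.  (TPM 1.2 uses SHA-1; this is an illustration, not
# the chip's actual interface.)
import hashlib


def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha1(pcr + measurement).digest()


def measured_boot(components):
    pcr = b"\x00" * 20                        # PCRs start zeroed at power-on
    for name, code in components:
        digest = hashlib.sha1(code).digest()  # measure the component
        pcr = extend(pcr, digest)             # record the measurement in the register
        print(f"after {name:10s} PCR = {pcr.hex()}")
    return pcr


if __name__ == "__main__":
    boot_chain = [("BIOS", b"bios image bytes"),
                  ("bootloader", b"bootloader bytes"),
                  ("OS kernel", b"kernel bytes")]
    measured_boot(boot_chain)
```

Because the register can only be extended, never set directly, any change to a measured component changes the final PCR value and becomes visible to a verifier.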

Another benefit of TPM is hardware or platform authentication that preserves the privacy of the user through Direct Anonymous Attestation (DAA).  Brickell, Camenisch, and Chen (2004) describe DAA as a secure group-style signature that cannot be tampered with after it is issued.  Additionally, each user can control whether their signatures can be linked to one another, allowing anonymity.  DAA involves four participants: the host or platform, its TPM, the issuer, and the verifier (Ryan, 2009).  The TPM generates a secret message, receives a signature for it from the issuer, and then uses that signature to prove to the verifier that attestation was obtained, without revealing its identity (Brickell, Camenisch, & Chen, 2004).  DAA also allows detection of rogue or published keys, because a verifier can confirm that a signature has already been used.  Brickell et al. (2004) proved DAA secure in the random oracle model under the strong RSA and decisional Diffie-Hellman assumptions.

Weaknesses of TPM

A known weakness of TPM is the cold boot attack, which defeats the disk encryption thought to protect the contents of a hard drive from physical access.  Halderman et al. (2009) report that they were able to overcome the disk encryption of BitLocker, TrueCrypt, and FileVault with cold boot attacks.  The idea behind an encrypted hard drive is that if a laptop is stolen in a locked state and the thief powers down the computer, everything in memory is erased and the encryption keys are lost.  Unfortunately, this is not always the case.  BitLocker is provided by Microsoft for use with TPM and encrypts parts of the disk on demand.  Halderman et al. (2009) created an automated tool called “BitUnlocker,” which uses an external USB hard disk with a specialized driver to remount BitLocker volumes under a Linux OS.  The tool runs a key finder that tries each candidate key until one works; after breaking in, the disk volume can be searched from the other OS.  By rebooting a Windows laptop and connecting the external drive, the researchers recovered the encryption keys and decrypted the disk in moments (Halderman et al., 2009).  An obvious mitigation technique is to prevent physical access to the system using a defense-in-depth strategy or to employ additional encryption methods on mobile technology.

Social engineering may also be a vulnerability for TPM, as most computer manufacturers, such as Dell, keep a master password list tied to each service tag number.  Information assurance professional Morrison (2010) found he could simply call Dell, provide the service tag number, and receive the key to unlock his BIOS.  Without providing any personally identifying information or passwords, anyone could call the company and receive the same master password.  On that same phone call, Morrison learned that makers of external hard drives also record the chips in the devices they sell, so the same kind of master password can be generated for them.  To protect sensitive mobile data, Morrison (2010) recommends using an aftermarket external hard drive whose maker may be less likely to provide such obliging customer service.


TPM Discussion


Ideal Application of TPM


One example of an application that works well with TPM is the random oracle model discussed by Gunupudi and Tate (2007).  As they note, the random oracle model “is an idealized theoretical model that has been successfully used for designing many cryptographic algorithms and protocols” (Gunupudi & Tate, 2007, p. 1).  In this model, a cryptographic scheme is proven secure in a setting where all parties, including the adversary, have access to a random function called a random oracle; the oracle is then replaced with a “good” cryptographic hash function in the standard model.  Gunupudi and Tate (2007) provide evidence that a trusted platform module can be used to instantiate the random oracle in distributed protocols, yielding a standard-model scheme that remains secure.

Another ideal application that supports TPM functionality is cloud computing.  This was championed by Liu et al. (2010), who proposed and then implemented virtual TPMs in a cloud-based architecture.  The premise behind this development was to provide TPM functionality to applications running on platforms that do not have a TPM chip.

According to Liu et al. (2010), TPM functionality delivered through cloud computing is easily accessible to applications written in diverse languages, because cloud computing exposes services over basic, widely supported protocols.  TPM and its hardware chip generally perform well in most applications.  Trusted platforms add value to applications and services such as electronic money systems, email, workstation sharing, platform management software, single sign-on, virtual private networks, Web access, and digital content delivery (Pearson, 2005).


Law Enforcement Application

When it comes to law enforcement and the Trusted Platform Module (TPM), officials see both benefits and drawbacks in certain TPM features.  From a digital forensics point of view, Burmester and Mulholland (2006) note that the advent of trusted computing has strong points: the trusted computing (TC) features criticized by naysayers may become a boon for cyber-investigators.  On the other hand, if file encryption becomes the norm, trusted computing may turn out to be law enforcement’s worst nightmare.


TPM Keys

When considering the types of TPM keys and their processes, Liu et al. (2010) note that the trust in TPM lies mainly in its capabilities for secure key management (key generation, storage, and use) and for secure storage and reporting of platform configuration measurements.  Each TPM has a unique endorsement key (EK), which is generated by the chip manufacturer.  Before using a TPM chip, users must take ownership of the chip and create a storage root key (SRK).  Both the EK and the SRK are RSA key pairs, and they are protected by keeping their private keys inside the TPM chip at all times.

The TPM contains an endorsement private key (EK) that uniquely identifies the TPM (and thus the physical host), along with cryptographic functions that cannot be modified.  The endorsing parties certify the matching public key to attest to the validity of the chip and the key in question (Santos, Gummadi, & Rodrigues, 2009).
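
To make the public/private split concrete, the sketch below generates two 2048-bit RSA key pairs standing in for the EK and SRK.  This is purely an illustration using the third-party cryptography package, not the TPM's actual interface: a real TPM generates and holds these keys in hardware, and only the public halves are ever exported.

```python
# Illustration only: the EK and SRK are 2048-bit RSA key pairs whose private
# halves never leave the chip.  Here the 'cryptography' package stands in for
# the TPM so the public/private split is visible; a real TPM generates and
# holds these keys in hardware.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa


def make_key_pair():
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return private_key, public_pem


if __name__ == "__main__":
    ek_private, ek_public = make_key_pair()    # stands in for the endorsement key (EK)
    srk_private, srk_public = make_key_pair()  # stands in for the storage root key (SRK)
    # Only the public halves would ever be exported for certification or sharing.
    print(ek_public.decode().splitlines()[0])
```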


TPM Authorization Protocols


According to Ryan (2009), the Object Independent Authorization Protocol (OIAP) creates an authorization session that can be used with any object but works only with certain commands.  An OIAP session begins when the user process executes the TPM_OIAP command together with a nonce argument.  Nonces generated by the user process are called odd nonces, and nonces generated by the TPM are called even nonces.  The TPM responds to TPM_OIAP with an authorization handle and a newly generated even nonce.  Each subsequent command within the session includes the authorization handle and introduces a fresh odd nonce, and each response from the TPM contains a fresh even nonce.  All authorization Hash Message Authentication Codes (HMACs) include the latest odd and even nonces, and in an OIAP session the authorization HMACs are keyed on the authdata of the resource (e.g., a key) requiring authorization (Ryan, 2009).


Another feature in this process is the Object Specific Authorization Protocol (OSAP), in which a session is created that governs a single object specified when the session is set up.  Ryan (2009) describes how OSAP works under TPM: an OSAP session is created when the TPM receives the TPM_OSAP command naming the object to be authorized, together with an odd OSAP nonce.  The reply includes the authorization handle, an even nonce for the “rolling” nonces, and an even OSAP nonce.  The user process and the TPM then each calculate a secret hash value, known as the “OSAP secret,” which is the HMAC of the odd and even OSAP nonces keyed on the object’s authdata (Ryan, 2009).  At this point, commands in the authorization session may be executed.  In an OSAP session, the authorization HMACs are keyed on the OSAP secret.  The rationale for this procedure is to allow the user process to cache the session secret for potentially long periods during a session without endangering the authdata on which the whole scheme is based (Ryan, 2009).

An OSAP session can also be used for numerous commands, but those commands must address the single object identified when the session was established.  A benefit of an OSAP session, as noted by Ryan (2009), is that it can be used for commands that introduce additional authdata to the TPM.
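
The rolling-nonce authorization idea behind both protocols can be sketched with standard HMAC primitives.  The example below is a simplification: the byte layouts and parameter ordering are not the real TPM 1.2 wire format, but it shows the two points that matter, namely that each HMAC covers the latest odd (caller) and even (TPM) nonces, and that the key is the authdata (OIAP) or the derived OSAP secret.

```python
# Simplified sketch of the rolling-nonce authorization idea behind OIAP/OSAP.
# Byte layouts and parameter ordering are not the real TPM 1.2 wire format;
# the point is that each HMAC covers the latest odd (caller) and even (TPM)
# nonces, keyed on the authdata (OIAP) or on the derived OSAP secret.
import hashlib
import hmac
import os


def new_nonce() -> bytes:
    return os.urandom(20)


def osap_secret(authdata: bytes, nonce_even_osap: bytes, nonce_odd_osap: bytes) -> bytes:
    # OSAP secret derived from the authdata and the OSAP nonce pair.
    return hmac.new(authdata, nonce_even_osap + nonce_odd_osap, hashlib.sha1).digest()


def authorize_command(key: bytes, command: bytes, nonce_even: bytes, nonce_odd: bytes) -> bytes:
    # Authorization HMAC over the command and the latest rolling nonces.
    return hmac.new(key, command + nonce_even + nonce_odd, hashlib.sha1).digest()


if __name__ == "__main__":
    authdata = hashlib.sha1(b"owner password").digest()
    even_osap, odd_osap = new_nonce(), new_nonce()     # exchanged at session setup
    secret = osap_secret(authdata, even_osap, odd_osap)

    nonce_even = new_nonce()                           # TPM's rolling nonce
    nonce_odd = new_nonce()                            # caller's rolling nonce
    mac = authorize_command(secret, b"example command", nonce_even, nonce_odd)
    print("authorization HMAC:", mac.hex())
```

Because the nonces change with every exchange, a captured command cannot simply be replayed, which is the property the rolling-nonce protocol is designed to provide.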

Attestation Principle

TPM attestation is a function that allows a remote party to verify the state of a system.  Validation is a key role in the design of TPM and supports the overall structure of a secure system.  This process exists to confirm that no modifications have taken place that deviate from a secure, standardized configuration.  Attestation verifies this against a standardized set of parameters recorded in the Platform Configuration Registers (PCRs) (Lioy, 2011).  The deviations attestation may detect include unauthorized software or hardware, which can have hostile intent toward the overall secure computing process.  The process involves a platform that contains a TPM and a verifier, also known as an appraiser.  The TPM within the platform must authenticate itself and confirm that the components within the system have not been modified, thus establishing a trustworthy source.  The TPM possesses an endorsement key (EK) that is delivered to the verifier, which allows the verifier to authenticate the PCR values (Segall, 2011).  If the verifier agrees that the platform’s configuration is secure, the verifier grants an authentication key to the platform, indicating a validated attestation.
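
A heavily simplified version of this exchange is sketched below: the platform signs a digest of its PCR values together with the verifier's nonce, and the verifier checks the signature and compares the digest against a known-good value.  Key handling and message formats are simplified and the key is generated in software for illustration; a real quote follows the TPM specification and chains back to an attestation identity key certified via the EK.  The cryptography package is assumed to be installed.

```python
# Simplified attestation flow: the platform signs a digest of its PCR values
# together with the verifier's nonce, and the verifier checks the signature
# and compares the digest against the expected "known good" value.
import hashlib
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

aik = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stand-in for the attestation key


def quote(pcr_values, nonce):
    digest = hashlib.sha256(b"".join(pcr_values) + nonce).digest()
    signature = aik.sign(digest, padding.PKCS1v15(), hashes.SHA256())
    return digest, signature


def appraise(digest, signature, expected_digest):
    try:
        aik.public_key().verify(signature, digest, padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        return False                      # quote was not produced by the platform's key
    return digest == expected_digest      # platform matches the known-good configuration


if __name__ == "__main__":
    pcrs = [hashlib.sha256(b"bios").digest(), hashlib.sha256(b"kernel").digest()]
    nonce = os.urandom(16)                # freshness challenge from the verifier
    expected = hashlib.sha256(b"".join(pcrs) + nonce).digest()
    d, sig = quote(pcrs, nonce)
    print("platform trusted:", appraise(d, sig, expected))
```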

Attestation Features

The method of attestation may be based on the needs of the organization that requires trusted computing.  Although attestation models differ, a baseline of relevant features should exist in each model.  According to Coker et al. (2008), attestation architectures should follow specific principles to be ideal for use.  These principles include:

• Current information

• Comprehensive data

• Limited disclosure

• Clear, logical language

• Trust mechanisms

An attestation model that constantly analyzes real-time information generated by the target system is better able to detect anomalies that could affect the verifier’s decision.  The analysis can be conducted using system measurement tools that provide a comprehensive array of information about specific portions of the system.  For example, measuring an operating system’s kernel can help verify that the target was not subject to a network-based attack (Coker, et al., 2008).  The full state or attributes of a target may be necessary to validate system contents and enable the attestation system to function, and this internal system data is vital to the attestation decision.  However, the full contents of the system state should be controlled by the target itself.  Having the target control the flow of system information helps minimize unnecessary disclosure, which could otherwise expose users within a secure environment.  The target system delivering the encryption key must present its evidence in a language the attestation system can comprehend.  Moreover, the attestation system must be acknowledged by both the appraiser and the target system, which allows all parties to identify the amount of variation between the systems.


Attestation Architecture

The design of an attestation system differs greatly based on the needs of the organization using it to securely process transactions.  Five general requirements, drawn from the principles of attestation, provide a meaningful framework (Coker, et al., 2008).  The requirements of an attestation system should include:

• Measurement mechanisms

• Self-protection

• Delegation through proxies

• Decision management

• Target separation

Architects should devise a comprehensive plan for configuring the measurement tool within an appraiser’s system.  Each measurement tool is suited to specific target parameters and will not be able to assess details from other targets, since each target produces a unique set of output information.  A measurement tool that understands the boundaries and limitations of its target and is properly configured will generate meaningful decisions during an appraisal.  In addition to building a sound measurement tool, the system needs a mechanism to secure the measurement process against unwanted deviations.  A preliminary baseline analysis of the measuring tool should be performed before conducting attestation procedures; by establishing this baseline, the attestation system can confirm its own integrity before monitoring target information.

Trust and credibility between the target and the attestation system present an obstacle for designers.  The information passed from the target to the attestation system includes sensitive parameters that require the utmost protection.  An additional intermediary, called an attestation proxy, can be established to ensure that the target system delivers the appropriate information while the attestation system receives only the data it needs to make its decisions (Coker, et al., 2008).  Trust by the target and the attestation system then falls on the attestation proxy rather than on a direct trust relationship, which would be problematic if the target’s information were unreliable.

Several targets may deliver vast amounts of information to a single attestation system.  Since an attestation system may be required to produce values that correlate to each specific target, a systematic application such as an attestation manager can serve as a databank for complex decision making in various scenarios.  The manager can disable specific measurement tools to increase efficiency based on each target’s system state.  While an attestation manager is helpful in the decision-making process, a target that supplies corrupt information may be a sign of manipulation, which could cause unwarranted modifications to the attestation system.  To prevent this, designers can implement virtualized systems that protect the source attestation system from modification.  A separate virtual machine creates an additional boundary between targets and the attestation system, ensuring that a target’s configuration, whether valid or corrupt, has no control over the appraiser’s measurement tools.

TPM Vulnerabilities

Existing vulnerabilities can void the credibility of certain TPM processes through hardware-related modifications.  Although viable solutions exist for most vulnerabilities, each attack poses a great risk to the process and creates the possibility of unforeseen consequences.  The most recent version of TPM, version 1.2, addresses critical vulnerabilities found in version 1.1.  In version 1.1, a simple hardware attack was possible using a 3-inch insulated wire to reset the TPM bus, bypassing the protective measures of TPM’s auditing mechanism (Lawson, 2007).  The issue is resolved in TPM version 1.2; however, other vulnerabilities continue to exist.

TPM is vulnerable to replay attacks, in which previously captured, legitimately authorized messages are resent to the TPM.  Specifically, the trusted computing authorization protocols, the Object Independent Authorization Protocol (OIAP) and the Object Specific Authorization Protocol (OSAP), have been subjected to numerous probes, including replay attacks (Bruschi, Cavallaro, Lanzi, & Monga, 2005).  The replay attack vulnerability is mitigated through the rolling nonce protocol, in which two nonces are incorporated into the HMAC of each message (Chen & Ryan, 2010).  TPM may also be subject to the extraction of secret keys, which threatens TPM authenticity because verifiers cannot “distinguish between real TPMs and fake ones,” known as rogue TPMs (Brickell, Camenisch, & Chen, 2004).  Viable solutions against rogue TPMs involve using an intermediary to guarantee the legitimacy of the TPM’s endorsement key.

Future of TPM

TPM modules are currently installed in some 600 million PCs as the technology becomes increasingly popular, and the Trusted Computing Group expects TPMs to reach an additional 500 million machines, including those of major organizations throughout the world, by 2013 (Berger, 2010).  This potential increase in demand creates a widespread issue for manufacturers, as hardware suitability becomes a major constraint; redesigning future hardware to handle the capabilities of new TPM versions is a viable solution (Schoen, 2003).

With the release of Microsoft Windows 8 to the public, a security feature built on the Unified Extensible Firmware Interface (UEFI) standard provides a trusted boot mechanism backed by the computer’s TPM (Ashford, 2012).  This component can measure the BIOS during secure boot, and the measurement can be reported through remote attestation to a party that can certify the validity of the BIOS and detect any deviations.  The Trusted Computing Group (n.d.) has released the TPM 2.0 library draft specification to the public.  The additions include:

• Algorithm enhancements

• Improvements to TPM availability

• Enhanced TPM management

• Additional cryptographic services for BIOS security



The success of cybercrime is a testament to the myriad ways criminals can infiltrate organizations and cause havoc.  Whether it is through vulnerable, malicious, or misconfigured programs, social engineering, physical theft, or electronic eavesdropping, an intruder only needs to find one weakness to exploit (Challener, Yoder, Catherman, Safford, & Van Doorn, 2008).  This leaves organizations with the daunting task of securing their machines and training their employees against an ever-changing array of attacks.  TPM and trusted computing look to address these attacks in one comprehensive way through the use of key management, authorization protocols, and attestation.  When TPM is used to its full potential, the consumer can trust that their machines will boot in a valid configuration, store data securely, identify which user is on which machine, and participate in secure protocols with uncompromised keys (Challener et al., 2008).  All of these features work toward frustrating the efforts of intruders and improving the security posture of an organization.  Because these protections require no in-depth interaction from the end user, training costs, insider threat risk, and the danger of a user introducing a threat into the organization are also reduced.  Managing the security of end users’ machines from a central root of trust gives administrators a smaller attack surface to watch.  The potential impact of TPM truly extends from the top of the organization to the bottom, and with the availability of software that allows organizations to take advantage of TPM, its adoption should grow.

One important piece of software that will assist with the acceptance of TPM is Windows 8.  In Microsoft’s latest version of Windows, TPM has been integrated at many points to make TPM setup and management as easy as possible.  An organization running Windows 8 along with Windows Server 2012 can take advantage of features such as automated provisioning and TPM management, measured boot with support for attestation, TPM-based virtual smart cards, BitLocker network unlock, and TPM-based certificate storage (Microsoft, 2012).  These features allow Windows users to take advantage of and integrate with the full suite of TPM protections, from key management to remote attestation to tamper detection.  As more organizations migrate to Windows 8, their ability to adopt TPM as a security policy should grow, since hardware that supports TPM is also easy to procure.  All of this means that the barrier to entry for organizations implementing TPM has never been lower.  Because TPM addresses security concerns from a hardware-based perspective, it provides a unique ability to bolster an organization’s security posture.  As such, the acceptance of TPM should only increase as the software infrastructure continues to mature to support TPM hardware.

TPM is a wide-ranging technology that touches many areas of security.  Because TPM spans areas such as key management, remote attestation, and authorization, it has the potential to significantly change the way organizations secure their networks.  This ability is attractive to a wide swath of industries, such as banks looking to ensure customers have the latest secure software before connecting to their networks, media companies wanting to enforce DRM, and companies seeking to protect sensitive information on laptops.  Although some parties have reservations about TPM, their concerns predominantly speak to the effectiveness of TPM and the level of security it achieves.  If an organization does not have concerns regarding computer forensics or end users’ rights, then TPM is an attractive avenue to pursue.  While TPM’s hardware and protocols have a few vulnerabilities, these have not been shown to be easily exploitable or to have disastrous potential, and they should not serve as the impetus to forgo using TPM.  With all of the benefits mentioned and the continued maturation and expansion of capabilities, TPM’s applicability should grow as well.

The topics explored throughout the course of this writing provide a foundation for understanding what TPM is, what it provides, its strengths and weaknesses, as well as future growth.  These subjects should serve as a comprehensive basis of key current issues and developments in TPM.


Aaraj, N., Raghunathan, A., & Jha, N. K. (2008). Analysis and Design of a Hardware/ Software Trusted Platform Module for Embedded Systems. ACM Transactions On Embedded Computing Systems, 8(1). doi:10.1145/1457246.1457254

Ashford, W. (2012). Will this be the year TPM finally comes of age? Retrieved from http://www.computerweekly.com/news/2240157874/Analysis-2012-Will-this-be-the-year-TPM-finally-comes-of-age

Berger, B. (2010). Securing data & systems with trusted computing now and in the future. Retrieved from http://www.trustedcomputinggroup.org/files/static_page_files/C71DF61F-1A4B-B294-D01538F6E3B1C39D/DSCI_InfosecSummit_2010%2010%2002_v2.pdf

Brickell, E., Camenisch, J., & Chen, L. (2004). Direct anonymous attestation. In Proceedings of the 11th ACM Conference on Computer and Communications Security, 132-145.

Bruschi, D., Cavallaro, L., Lanzi, A., & Monga, M. (2005, December). Replay attack in TCG specification and solution. In Computer Security Applications Conference, 21st Annual (pp. 11-pp). IEEE.

Burmester, M., & Mulholland, J. (2006, April). The advent of trusted computing: implications for digital forensics. In Proceedings of the 2006 ACM symposium on Applied computing (pp. 283-287). ACM.

Cabiddu, G., Cesena, E., Sassu, R., Vernizzi, D., Ramunno, G., & Lioy, A. (2011). The Trusted Platform Agent. IEEE Software, 28(2), 35-41. doi:10.1109/MS.2010.160

Chen, L., & Ryan, M. (2010). Attack, solution and verification for shared authorisation data in TCG TPM. Formal Aspects in Security and Trust, 201-216.

Coker, G., Guttman, J., Loscocco, P., Sheehy, J., & Sniffen, B. (2008). Attestation: Evidence and trust. Information and Communications Security, 1-18.

Halderman, J. A., Schoen, S. D., Heninger, N., Clarkson, W., Paul, W., Calandrino, J. A., & Felten, E. W. (2009). Lest we remember: Cold-boot attacks on encryption keys. Communications of the ACM, 52(5), 91-98.

Lawson, N. (2007). TPM Hardware Attacks. Retrieved from http://rdist.root.org/2007/07/16/tpm-hardware-attacks/

Gunupudi, V., & Tate, S. R. (2007, May). Random oracle instantiation in distributed protocols using trusted platform modules. In Advanced Information Networking and Applications Workshops, 2007, AINAW’07. 21st International Conference on (Vol. 1, pp. 463-469). IEEE.

Lioy, A. (2011, October 16). Remote attestation. Retrieved from http://security.polito.it/trusted-computing/remote-attestation/

Liu, D., Lee, J., Jang, J., Nepal, S., & Zic, J. (2010, December). A cloud architecture of virtual trusted platform modules. In Embedded and Ubiquitous Computing (EUC), 2010 IEEE/IFIP 8th International Conference on (pp. 804-811). IEEE.

Morrison, A. (2010, July 21). The social hacking of the un-trusted platform module (TPM). Retrieved from http://blog.morrisontechnologies.com/2010/07/21/the-social-hacking-of-the-un-trusted-platform-module-tpm/

Pearson, S. (2005). Trusted computing: Strengths, weaknesses and further opportunities for enhancing privacy. Trust Management, 91-117.

Ryan, M. (2009). Introduction to the TPM 1.2. DRAFT of March, 24. Retrieved from https://www.cs.bham.ac.uk/~mdr/teaching/modules08/security/intro-TPM.pdf

Sadeghi, A. R., & Stüble, C. (2004). Property-based attestation for computing platforms: Caring about properties, not mechanisms. In Proceedings of the 2004 Workshop on New Security Paradigms, 67-77.

Santos, N., Gummadi, K. P., & Rodrigues, R. (2009, June). Towards trusted cloud computing. In Proceedings of the 2009 conference on Hot topics in cloud computing (pp. 3-3). USENIX Association.

Schmitz, J., Loew, J., Elwell, J., Ponomarev, D., & Abu-Ghazaleh, N. (2011). TPM-SIM: A Framework for Performance Evaluation of Trusted Platform Modules. DAC: Annual ACM/IEEE Design Automation Conference, 236-241.

Schoen, S. (2003). Trusted computing: Promise and risk. Electronic Frontier Foundation, 16, 26.

Segall, A. (2011). Attestation and authentication protocols using the TPM. Retrieved from http://www.cylab.cmu.edu/tiw/slides/segall-attestation.pdf

Trusted Computing Group. (n.d.). TPM 2.0 library specification FAQ. Retrieved from https://www.trustedcomputinggroup.org/resources/tpm_20_library_specification_faq

Van Dijk, M., Sarmenta, L. F., Rhodes, J., & Devadas, S. (2007). Securing Shared Untrusted Storage By Using Tpm 1.2 Without Requiring A Trusted Os. Technical report, MIT CSAIL CSG Technical Memo, 498.



Denial of Service (DoS) Detection, Prevention, and Mitigation Techniques

Denial of Service (DoS) Detection, Prevention, and Mitigation Techniques

Author Amy L. Wees


Today most businesses host websites where customers can access their account information and employees can access timecards, conduct discussions, input customer information, track financials, and perform countless other activities. Without access to a network, productivity and profitability plummet. Denial of Service attacks aim large amounts of traffic at a server, causing it to crash or become overloaded and limiting access for legitimate customers. Denial of Service (DoS) attacks can do a lot of damage with little warning and leave the victim with much to recover (Goldman, 2012). For this reason it is imperative to detect, prevent, and mitigate DoS attacks where possible. This paper summarizes methods for DoS detection, prevention, and mitigation based on the research of three separate sources.
Denial of Service (DoS) Detection, Prevention, and Mitigation Techniques
Corporations, schools, government agencies, and even home computer users conduct most of their business on computer networks by sharing information, resources, and files. This networking can be accomplished on a closed network or, in most cases, from one network or host to another via the Internet. As soon as information travels over the wire from one place to the next, it becomes vulnerable to interception, corruption, theft, or misuse. Information entering a network from the Internet can also expose the entire network and its hosts to computer viruses, Trojans, malware, and a myriad of other dangerous possibilities.

Today most businesses host websites where customers can access their account information and employees can access timecards, conduct discussions, input customer information, track financials, and perform countless other activities. Without access to a network, productivity and profitability plummet. In September 2012, the websites of Bank of America, Wells Fargo, PNC, JP Morgan, and US Bank were inaccessible to customers for over a week during the largest reported Denial of Service attacks in history (Goldman, 2012). Denial of Service attacks aim large amounts of traffic at a server, causing it to crash or become overloaded and limiting access for legitimate customers. In the recent bank attacks, large application servers in various locations were connected and used as a botnet to overwhelm the banks’ servers, resulting in an extended period of blocked access to customer financial information (Goldman, 2012). Botnets are often created from distributed computers that have been taken over without the user’s knowledge through the use of viruses or malware. Although this type of attack was thought to require a lot of preplanning, it was not very sophisticated, and it proves that Denial of Service (DoS) attacks can do a lot of damage with little warning and leave the victim with much to recover (Goldman, 2012). For this reason it is imperative to detect, prevent, and mitigate DoS attacks where possible.

This paper aims to summarize methods for DoS detection, prevention and mitigation based on the research of three separate sources. The research papers chosen for this summary are as follows:
1. A Taxonomy of DDoS Attack and DDoS Defense Mechanisms by Jelena Mirkovic and Peter Reiher
2. DDoS attacks and defense mechanisms: classification and state-of-the-art by Christos Douligeris and Aikaterini Mitrokotsa
3. Survey of Network-Based Defense Mechanisms Countering the DoS and DDoS Problems by Tao Peng, Christopher Leckie, and Kotagiri Ramamohanarao

These sources were selected because each uses a similar methodology in analyzing DoS attacks. Each paper explores the different types of Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks, the tools available to both perpetrate and defend against attacks, and prevention or mitigation techniques. Where other research focuses on a single technique for detection and prevention, such as Internet Protocol (IP) traceback, packet filtering, or flow control, the works listed above examine a broad range of practices for DoS and DDoS detection, prevention, and mitigation based on the situations presented.

Attack Categories
There is no single clear path for detecting DoS or DDoS attacks. Detection is very much dependent on the type of attack, the target, and the perpetrator’s method. This paper will first describe the various types of attack and then, based on the sources chosen, list methods for detection, prevention, or mitigation. Accidental denial of service, such as that caused by a misconfigured computer or router, is not considered in this summary.

DoS attacks can be categorized by the layer or mechanism they exploit, such as the network device layer, the operating system layer, the application layer, data flooding, and protocol features (Douligeris & Mitrokotsa, 2004). Examples of these attacks and their detection, prevention, and mitigation strategies follow.

Network Layer Attack
At the network layer, attacks target weaknesses in the software or hardware of devices such as routers. For example, Cisco 700 series routers are known to have a buffer overrun issue in the password checking process; this weakness can be exploited by connecting to the router via telnet and entering lengthy passwords (Douligeris & Mitrokotsa, 2004). Routers can also be exploited by IP spoofing, in which IP packets containing forged source information are sent to the router. Since the router has no authentication or traceback mechanism, the packets continue on their path with no way for the receiving target to detect what is happening or where the packets are coming from (Peng, Leckie, & Ramamohanarao, 2007). The problem is that spoofed packets congest the system’s bandwidth alongside legitimate traffic, eventually denying service altogether.

Network Attack Detection
Router-based attacks can usually be detected by monitoring the amount of traffic across the network; unusually high traffic may be reason for concern. Attacks launched at varying rates may be more difficult to recognize (Mirkovic & Reiher, 2004). One detection method mentioned by Mirkovic and Reiher (2004) is MULTOPS, which keeps track of packet activity per IP address and can identify addresses participating in DoS attacks. This allows the victim to identify the likely attack source and filter or block those IPs immediately and in the future.
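
The per-address rate tracking at the heart of such detection can be illustrated with a small sliding-window counter. The sketch below is not MULTOPS itself, which organizes counters in a tree and compares packet rates in both directions; it only shows the basic idea of flagging sources whose packet rate crosses a threshold, with arbitrary illustrative values for the window and threshold.

```python
# Sketch of per-address rate tracking for DoS detection: count packets per
# source IP inside a sliding window and flag addresses whose rate crosses a
# threshold so they can be filtered.  Window and threshold are illustrative.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
THRESHOLD_PKTS = 1000   # illustrative packets-per-window limit


class RateMonitor:
    def __init__(self):
        self.packets = defaultdict(deque)    # src_ip -> packet timestamps

    def observe(self, src_ip, now=None):
        now = now if now is not None else time.time()
        q = self.packets[src_ip]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()                      # drop timestamps outside the window
        return len(q) > THRESHOLD_PKTS       # True => candidate attack source


if __name__ == "__main__":
    mon = RateMonitor()
    for i in range(1500):                    # simulate a burst from one source
        flagged = mon.observe("203.0.113.5", now=i * 0.001)
    print("flag 203.0.113.5:", flagged)
```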

Network Attack Prevention and Mitigation
Attacks against routers or domain name service (DNS) servers can be mitigated by setting up secondary fail-over resources within the network design (Mirkovic & Reiher, 2004). To prevent attacks, Peng, Leckie, and Ramamohanarao (2007) recommend packet filtering at the router to keep spoofed traffic from entering the network, with the caveat that filtering requires extensive deployment to be effective. Traffic should be filtered when entering and leaving the network and at each router along the way, so that traffic allowed into one segment can still be dropped later in the process if it does not meet network criteria.
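
The basic ingress/egress check behind such filtering is simple: a border router should only accept packets whose source address is plausible for the interface they arrive on. The sketch below illustrates that check with the standard ipaddress module; the interface names and prefixes are illustrative placeholders, not a real router configuration.

```python
# Minimal sketch of an ingress/egress filtering check: a border router should
# only accept packets arriving on an interface whose source address belongs to
# the prefixes expected on that interface.  Prefixes below are illustrative.
from ipaddress import ip_address, ip_network

EXPECTED_PREFIXES = {
    "lan0": [ip_network("192.168.10.0/24")],   # internal segment
    "wan0": [ip_network("0.0.0.0/0")],         # upstream (anything external)
}
INTERNAL = ip_network("192.168.10.0/24")


def accept(interface, src_ip):
    src = ip_address(src_ip)
    if interface == "wan0" and src in INTERNAL:
        return False   # external packet claiming an internal source -> spoofed
    return any(src in net for net in EXPECTED_PREFIXES[interface])


if __name__ == "__main__":
    print(accept("wan0", "192.168.10.7"))   # False: spoofed internal address
    print(accept("lan0", "192.168.10.7"))   # True: legitimate internal host
```

As the sources note, this check is only effective when it is deployed widely, since a single unfiltered network can still originate spoofed traffic.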

Operating System and Host Attacks
Attacks on operating systems exploit weaknesses in protocol implementations. An example is the Internet Control Message Protocol (ICMP) flood. Because ICMP messages sent to a broadcast address reach all machines on a network, an attacker can send an ICMP echo request to the broadcast address with the victim’s address spoofed as the source; the request is forwarded to all network hosts, and when every host sends an echo reply, the traffic floods the victim’s network. This particular example is known as a “smurf” attack (Peng, Leckie, & Ramamohanarao, 2007). Another ICMP-related attack is the ping of death, which sends echo requests larger than the maximum IP packet size and can crash the victim’s machine (Douligeris & Mitrokotsa, 2004).

The SYN flood attack exploits the three-way handshake required of a TCP connection to exhaust the memory of the targeted machine. The attacker sends connection requests with spoofed source IP addresses, and the target stores each half-open connection in its memory while waiting for the response that would complete the handshake. Because a response will never arrive from the forged addresses, the machine accumulates too many half-open connections and its connection queue is eventually exhausted (Peng, Leckie, & Ramamohanarao, 2007).

Operating System and Host Detection
To detect TCP SYN floods, Mirkovic and Reiher (2004) recommend a standard detection strategy based on a rule set that looks for half-open TCP connections so they can be deleted from the memory stack. Batch detection can also be used; it captures statistical information about incoming traffic over time, and an attack can be detected when the traffic patterns change (Peng, Leckie, & Ramamohanarao, 2007).
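
A rule-based check of this kind boils down to tracking handshakes that never complete. The sketch below keeps a table of half-open connections, expires stale entries after a timeout, and raises an alarm when the number of expired entries grows too large; the timeout and threshold values are illustrative, not recommendations.

```python
# Sketch of rule-based SYN flood detection: track connections that sent a SYN
# but never completed the handshake, and alarm when too many half-open entries
# have gone stale.  Timeout and threshold values are illustrative.
import time

TIMEOUT = 5.0        # seconds a handshake may remain half-open
MAX_HALF_OPEN = 200  # illustrative alarm threshold


class SynTracker:
    def __init__(self):
        self.half_open = {}   # (src_ip, src_port, dst_ip, dst_port) -> time SYN seen

    def on_syn(self, flow, now=None):
        self.half_open[flow] = now if now is not None else time.time()

    def on_ack(self, flow):
        self.half_open.pop(flow, None)      # handshake completed

    def check(self, now=None):
        now = now if now is not None else time.time()
        stale = [f for f, t in self.half_open.items() if now - t > TIMEOUT]
        for f in stale:
            del self.half_open[f]           # expire, as the mitigation suggests
        return len(stale) > MAX_HALF_OPEN   # True => probable SYN flood


if __name__ == "__main__":
    tracker = SynTracker()
    for i in range(500):                    # many SYNs that never complete
        tracker.on_syn((f"198.51.100.{i % 250}", i, "10.0.0.1", 80), now=0.0)
    print("SYN flood suspected:", tracker.check(now=10.0))
```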

Operating System and Host Prevention and Mitigation
Mitigation of the ICMP flood can be accomplished by disabling the automatic rebroadcasting service or reconfiguring routers to forward only specified traffic. To prevent a SYN flood, the operating system can be set to limit the number of TCP connections waiting for a response and to drop them after a timeout period (Peng, Leckie, & Ramamohanarao, 2007). To further prevent TCP SYN attacks, protocols on host machines should be patched and updated often (Mirkovic & Reiher, 2004).

Application Layer Attacks
Applications on hosts can also be attacked based on their vulnerabilities. One instance given by Mirkovic and Reiher (2004) is attacking an authentication server by sending phony signatures. The server continues to function otherwise, but any application requiring authentication will be denied to the user.
A more commonly seen application-level attack is overwhelming a website with traffic, causing the web server to crash. This can be accomplished through a website’s search engine, forms, or account request pages, or through a large number of simultaneous visits, as in an HTTP flood. Because the Internet is so heavily used, most firewalls allow open traffic on port 80 (HTTP), making it a prime target for attack. During an HTTP flood, distributed attackers, organized as botnets, flood the web server with requests. Most botnet software is designed to help attackers avoid detection by hiding IP addresses and by pushing requests for large files to sites, taking up even more bandwidth (Peng, Leckie, & Ramamohanarao, 2007).

Detection of Application Attacks
Detecting application-layer attacks is problematic because there is not a complete denial of service, the malicious activity level is very low, and the packets are not necessarily identifiable. To detect application-level attacks, Mirkovic and Reiher (2004) recommend monitoring each application through the intrusion detection system and screening regularly for suspicious activity. HTTP floods can be detected by looking for repeated requests for large files, which can then be blocked by the server (Peng, Leckie, & Ramamohanarao, 2007).
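
The "repeated requests for large files" pattern can be spotted directly in web server logs. The sketch below assumes a log in the common combined format with the response size in the tenth field, and uses arbitrary illustrative thresholds; the log file name is hypothetical.

```python
# Sketch of spotting the "repeated requests for large files" pattern in a web
# server access log.  Assumes the common combined log format with the response
# size in field 10; threshold values are illustrative.
from collections import Counter

LARGE_BYTES = 5_000_000     # treat responses above ~5 MB as "large files"
REPEAT_LIMIT = 50           # repeated large pulls per client before flagging


def suspicious_clients(log_path):
    large_hits = Counter()
    with open(log_path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) < 10:
                continue
            client_ip, size = fields[0], fields[9]
            if size.isdigit() and int(size) >= LARGE_BYTES:
                large_hits[client_ip] += 1
    return [ip for ip, count in large_hits.items() if count >= REPEAT_LIMIT]


if __name__ == "__main__":
    for ip in suspicious_clients("access.log"):   # hypothetical log file
        print(f"candidate HTTP-flood source: {ip}")
```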

Prevention and Mitigation of Application Layer Attacks
Douligeris and Mitrokotsa (2004) reference throttling as a mitigation tactic. Overloaded web servers can set router throttles so that all traffic passing through the router is limited to a configured rate. This prevents the web server from becoming overloaded and crashing. It also keeps requests pushing large files (and thus exceeding the throttle rate) from reaching the server while allowing legitimate requests through. The throttling method has not yet been proven in a large commercial setting.
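
One common way to express such a rate limit is a token bucket: requests are forwarded only while tokens are available, so bursts above the configured rate are dropped before reaching the server. The sketch below is a generic illustration of that idea with made-up rates, not the specific throttle algorithm evaluated in the source.

```python
# Simple token-bucket sketch of the throttling idea: requests are forwarded
# only while tokens are available, so bursts above the configured rate are
# dropped before they reach the web server.  Rates are illustrative.
import time


class Throttle:
    def __init__(self, rate_per_sec=100, burst=200):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True      # forward the request
        return False         # over the throttle limit: drop or queue


if __name__ == "__main__":
    gate = Throttle(rate_per_sec=5, burst=10)
    allowed = sum(gate.allow() for _ in range(100))
    print(f"{allowed} of 100 burst requests forwarded")
```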

Mirkovic and Reiher (2004) recommend overall system security to defend against DDoS attacks: ensuring that measures such as intrusion prevention and detection systems and security patching are in place on all hosts. The idea is that attackers are able to gain control of zombie machines for botnets because so many machines are not secured properly. If simple security recommendations were followed, the chances of attackers assembling such an army of machines would be lessened, and the scale of attacks would be reduced accordingly.

Denial of service and distributed denial of service attacks can happen at many different layers and levels within a network, and the examples given in this paper only scratch the surface of what is possible. The sources used for this summary offer a wealth of information on the detection, prevention, and mitigation of these attacks, all of which is significant in understanding the scope of the problem. Most importantly, they provide a way ahead for securing systems against specific attacks and confirm the difficulty of completely detecting, preventing, and mitigating denial of service attacks.



Douligeris, C., & Mitrokotsa, A. (2004). DDoS attacks and defense mechanisms: classification and state-of-the-art. Computer Networks, 643-666.
Goldman, D. (2012, September 28). CNN Money. Retrieved from http://money.cnn.com/2012/09/27/technology/bank-cyberattacks/index.html
Mirkovic, J., & Reiher, P. (2004). A Taxonomy of DDoS Attack and DDoS Defense Mechanisms. ACM.
Peng, T., Leckie, C., & Ramamohanarao, K. (2007). Survey of Network-Based Defense Mechanisms Countering the DoS and DDoS Problems. ACM Computer Surveys, 1-42.



iTrust Database Software Security Assessment

iTrust Database Software Security Assessment

Security Champions Corporation (fictitious) Assessment for client Urgent Care Clinic (fictitious)

Amy Wees, Brooks Rogalski, Kevin Zhang, Stephen Scaramuzzino and Timothy Root

University of Maryland University College

Author Note

Amy Wees, Brooks Rogalski, Kevin Zhang, Stephen Scaramuzzino and Timothy Root, Department of Information and Technology Systems, University of Maryland University College.

This research was not supported by any grants.

Correspondence concerning this research paper should be sent to Amy Wees, Brooks Rogalski, Kevin Zhang, Stephen Scaramuzzino and Timothy Root, Department of Information and Technology Systems, University of Maryland University College, 3501 University Blvd. East, Adelphi, MD 20783. E-mail: acnwgirl@yahoo.com, rogalskibf@gmail.com, kzhang23@gmail.com, sscaramuzzino86@hotmail.com and Chad.Root@gmail.com



The healthcare industry, which takes in over $1.7 trillion a year, has begun moving into the technological era.  Healthcare is one of the most critical infrastructures in the world today, and one of its greatest challenges is the storage of information and data.  As the industry pushes to the forefront of technological advances, many changes are taking place to streamline copious amounts of information and data into something more manageable.  One major change in the healthcare industry has been the implementation of Electronic Medical Record (EMR) systems.  With both risks and benefits, electronic medical record systems promise to change the way the healthcare industry operates.  iTrust is a role-based health care web application.  Through this system, patients can see and manage their own medical records.  Medical personnel can manage the medical records of their patients, including those provided by other medical personnel, be alerted to patients with warning signs of chronic illness or missing immunizations, and perform bio-surveillance such as epidemic detection.  Today, the gradual introduction of these electronic medical records lies at the center of the computerized healthcare industry; they are slowly being implemented to provide modern technologies such as cloud database systems and cloud network storage and to streamline the flow of medical data and patient information.


Keywords: iTrust, database, cloud computing, software security, application security


iTrust Database Software Security Assessment

Security Champions Corporation is a software security company that specializes in the assessment and analysis of software used primarily in the medical field.  Urgent Care Clinic has hired Security Champions to assess the primary cyber threats and vulnerabilities associated with its use of the open source electronic medical records software iTrust.  As much of the medical industry is moving toward electronic medical records (EMR), we want to ensure our client is in compliance with stringent regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the Sarbanes-Oxley Act (SOX).  We will also provide a risk assessment and ease-of-attack threat analysis for several new requirements Urgent Care Clinic has requested be added to the iTrust software.  The following four requirements are reviewed and assessed:

  1. Add role for emergency responders to view patient emergency reports containing medical information such as allergies, current and previous diagnosis, medication and immunization history as well as blood type.
  2. Allow patients to search the database for qualified licensed health care professionals (LHCP) for specific diagnosis.  The patient will be able to view the doctor’s name, number of patients treated for the specified condition, laboratory tests requested and medication used to treat the diagnosis as well as patient satisfaction ratings.
  3. The third requirement is to update the diagnostics code tables to reflect new ICD-10 coding standards outlined by American Medical Association guidelines.
  4. The last requirement is to allow a patient to view the access log for their medical records in an online cloud database system.  This allows the patient to see what changes are made to their records and who made those changes.

iTrust Database Software Overview

iTrust is an open source software application created and maintained by engineers at North Carolina State University.  The software allows for medical staff from various locations to access patient records, schedule visits, order medications and laboratory tests, and view records, diagnosis and test results.  iTrust also allows patients to manage their care by viewing records, scheduling office visits, and finding health care providers in the area (UMUC, 2011).

iTrust Database Table Security Assessment

Each Security Champions teammate individually assessed the security of the various database tables in the iTrust database.  Ratings were limited to the values 1, 2, 3, 5, 8, 13, 20, 40, and 100, with 1 being the lowest security rating and 100 the highest.  The table in Appendix A lists each teammate’s individual values (noted by initials) along with the combined rating for each table (see Appendix A: Table 1 – Database Table Value Points).


Analysis of New Requirements

The information age is growing exponentially, and access to resources and information is critical.  This holds true in the medical field, particularly for medical staff, emergency responders and patients.  Adding new requirements to the iTrust system allows for better care, faster medical attention, and more informative data for the client.  These new requirements will enhance Urgent Care Clinic’s communication capabilities and allow for greater success.  By reviewing case-by-case scenarios involving medical and background information, these requirements benefit every aspect of Urgent Care Clinic.  The following analysis provides more information on the new requirements.

Emergency Responder

Urgent Care Clinic is requesting four additions to the roles and allowable access points of the iTrust healthcare cloud database system and application.  These additions will be valuable to emergency responders and to individuals seeking information for their own medical care instantaneously.  An emergency responder, or first responder, is anyone qualified and certified to provide pre-hospital care before the patient enters a medical facility.  These responders need access to essential health information in order to provide the most appropriate and advanced medical treatment in an effort to save a patient’s life.  The responder could stabilize, treat and perform certain medical procedures on the patient according to the patient’s personal medical history (Department of Health & Human Services, 2006).  The responder would need access to allergy, blood type, prescription history, and medical history information showing prior surgeries and prior diagnoses.  This vital information is valuable when assessing individuals in the field.  The procedures the responders perform on the patient can then be documented in the iTrust system, so that when the individual arrives at the hospital the attending medical staff can view what was done and evaluate patient treatment from that point forward.

In order for the responder to be authorized and then authenticated to the system, a biometric control would be applied.  This authentication procedure would be extremely beneficial in the field given the stress of the job.  The responder would use his or her fingerprint to gain access and then proceed to the required medical information.  The use of a password and user ID would slow response to the patient because the responder would have to remember that information.  If responders are unable to gain access and the proper medical care is not administered, this could lead to lawsuits or even death.  HIPAA would have a role in allowing these types of responders to gain access.  Patients could sign a HIPAA waiver during a doctor visit and have it kept in the database so that access could be granted without delay.

Find qualified licensed health care professional

Allowing the patient to search the iTrust cloud database system for an LHCP would give them more control over their health care and enhance the quality of care they receive based on their preferences and diagnosis.  This requirement will also give the user relevant information on where the medical facility is, how many people have been referred to the facility, which doctors are considered experts in a particular field, what procedures were used, and the satisfaction of the people who have been seen at that particular facility.  Allowing the patient to name their own medical preferences would also decrease man-hours for the staff normally responsible for these tasks.

Providing patients electronic health and medical information by means of a cloud database system makes way for streamlined care by delivering the latest medical reports instantaneously, allows rural individuals to gain access to specialized medical procedures, and may cut costs in certain healthcare facilities (Polito, 2012).  Cutting the number of patients in any single facility would allow for better care of patients, decreased wait times and more precise diagnoses, because each patient’s current medical and health information and patient history could be reviewed thoroughly and quickly by medical staff.  Finding the right health care professional would also allow the patient the opportunity to prepare questions in advance, have selective information prior to attending the visit, and be better advised on what to expect during the entire process.  This information could be very valuable to patients and medical providers because providers would not spend time on individuals they cannot treat and who would have to be referred to another doctor.  This access could provide more efficient and effective medical care.

Update diagnosis code table                

ICD-9-CM is an outdated medical code system; the new internationally used code system is ICD-10.  The new code system needs to be implemented in the iTrust medical application so that medical providers can accurately diagnose patients and medical staff can understand a patient’s history.  Updating the coding system will provide proper analysis, quality management within the medical profession, increased productivity and overall compliance with medical regulations (Bounos, n.d.).  Entering the new codes would allow a patient to be seen at multiple facilities throughout the world, with all medical care providers understanding the prior history.  Outdated codes could cause errors in treating the patient and possibly cause severe physical harm.  ICD-10 includes significant improvements, such as combination diagnosis/symptom codes that reduce the number of codes needed to describe a condition and additional information relevant to ambulatory and managed care encounters (Centers for Disease Control and Prevention, 2012).
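
To make the code-table update concrete, the sketch below migrates a handful of diagnosis rows from ICD-9-CM to ICD-10-CM codes. The two sample mappings and the (code, description) row format are illustrative assumptions, not the iTrust schema or a complete crosswalk, and unmapped codes are preserved for manual review.

```python
# Sketch of the "update diagnosis code table" requirement. The sample
# ICD-9-CM -> ICD-10-CM pairs are illustrative, not a complete crosswalk,
# and the row format is an assumption rather than the iTrust schema.
ICD9_TO_ICD10 = {
    "250.00": "E11.9",  # type 2 diabetes mellitus without complications
    "401.9": "I10",     # essential (primary) hypertension
}

def migrate_codes(rows):
    """Return diagnosis rows with ICD-9-CM codes replaced by ICD-10-CM codes.

    Rows are (code, description) tuples; unknown codes are left unchanged
    so they can be reviewed manually rather than silently dropped.
    """
    migrated, unmapped = [], []
    for code, description in rows:
        if code in ICD9_TO_ICD10:
            migrated.append((ICD9_TO_ICD10[code], description))
        else:
            migrated.append((code, description))
            unmapped.append(code)
    return migrated, unmapped

if __name__ == "__main__":
    sample = [("250.00", "Diabetes mellitus, type 2"), ("V70.0", "Routine exam")]
    print(migrate_codes(sample))
```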

View Access Log

The last requirement is to give the patient the ability to view the access log for their medical records.  The access log would provide information on who updated their medical information and when the change occurred.  This could be very valuable to a patient, as they can communicate with the facility regarding any discrepancies in their chart.  It also acts as a check-and-balance system between the patient and the provider, which could assist with medical insurance billing and payment information.  For example, if a patient was diagnosed with ailment X but the provider mistakenly coded ailment Y in the system, the insurance company may or may not cover the cost of the visit or associated procedures.  The access log information can then be provided to the insurance provider and the medical facility for correction.
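
A minimal sketch of how such a patient-facing view might be queried is shown below. The transactionlog table name comes from the iTrust design discussed later in this assessment, but the column names, sample rows, and query are illustrative assumptions rather than the actual schema; the query is parameterized, scoped to the requesting patient's MID, and returns only the fields a patient needs (no IP addresses or other users' identifiers).

```python
import sqlite3

# Sketch of the patient-facing access-log view. Columns and sample rows are
# illustrative assumptions, not the actual iTrust schema.
def patient_access_log(conn, patient_mid):
    """Return only this patient's entries, and only the fields a patient needs."""
    return conn.execute(
        "SELECT timestamp, actor_role, action FROM transactionlog "
        "WHERE patient_mid = ? ORDER BY timestamp DESC",
        (patient_mid,),   # parameterized to avoid string-built SQL
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transactionlog "
                 "(timestamp TEXT, actor_role TEXT, action TEXT, patient_mid INTEGER)")
    conn.executemany(
        "INSERT INTO transactionlog VALUES (?, ?, ?, ?)",
        [("2013-06-01 09:14", "LHCP", "updated medication list", 42),
         ("2013-06-03 16:02", "ER", "viewed emergency report", 42),
         ("2013-06-03 16:05", "LHCP", "edited diagnosis", 77)],
    )
    for row in patient_access_log(conn, 42):
        print(row)   # entries for patient 42 only; no IP addresses exposed
```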

Another advantage of allowing patients to view the access log is that they can see whether someone has compromised their medical information.  Catching such a breach early enough may allow law enforcement time to track the perpetrator before the information is used in a manner harmful to the patient.  Without access, the likelihood of a patient knowing their information has been tampered with is severely lessened.

HIPAA was enacted to ensure that personal medical and health information remains secure from others who could use the information wrongfully or intentionally against an individual.  HIPAA allows the patient more control over their personal information and applies limits on who can see the information and on what information is disclosed (Thacker, 2003).  The law itself provides the patient with access to their medical information and the ability to see what was logged in their records.

Applying the new requirements to the iTrust medical cloud database system gives responders, medical professionals and patients the ability to see information that supports a proactive approach to medical care.  The efficiency in how medical care is provided could reduce medical care costs and make hospital visits more effective due to the limited number of individuals waiting to be seen.  Patients will have the option to make an informed decision about which doctor they will see and will have more background information before they see any individual.


Ease of Attack

The iTrust cloud database is relational and made up of tables that account for all the data processing needs of a medical office.  The tables record transactions and patient information.  Specifically, the data recorded for all patients is considered personal health information and falls under the Health Insurance Portability and Accountability Act (HIPAA).  New requirements to the database will pose risks to the confidentiality, integrity and availability (CIA) of data if threats are not mitigated.

The following tables provide supplemental data that feed into the patient record and transaction history.  The medical procedures table (cptcodes) lists procedures performed at office visits and hospitals; the hospitals table (hospitals) lists hospitals in the system; the diagnosis and immunization table (icdcodes) lists diagnoses and immunizations with their codes; and the standard medication codes table (ndcodes) provides a list of medications.

Other tables in the system are relational and are linked to one another through ‘id’ fields.  The allergies table (allergies) links with the patient record, listing allergies by type or description.  The lab procedures table (labprocedure) provides information on what was performed during an office visit and is related to the patient and office visit tables.  The login failure log (loginfailures) records failed logins with the date, time, and IP address.  The office visits table (officevisits) relates the patient id, hospital id, and office id to the office diagnosis (ovdiagnosis), office medication (ovmedication), office procedure (ovprocedure), and office survey (ovsurvey) tables.  The patients table is a central table that contains personally identifiable information and relates to the personal health information, personnel, lab procedure, users, and transaction log tables.  Together, the tables involved include the medication codes (ndcodes), office visits (officevisits), office visit diagnoses (ovdiagnosis), office visit medication prescriptions (ovmedication), office visit medical procedures (ovprocedure), office visit surveys (ovsurvey), patients (patients), personal health information (personalhealthinformation), personnel (personnel), transaction log (transactionlog), and users (both patients and personnel, called ‘users’) tables.  Figure 1: Relational Design shows the relations between the tables.

Ease of attack is the calculation of valued risk by table (value points) on a scale of 1 to 100.  The value points show which tables will be least and most attractive to attack.  Ease points are calculated by determining the average value points of the tables each requirement uses and multiplying by the maximum (highest) asset value to obtain a security risk value.  The requirements are then ranked by security risk, where a higher value means a higher ease of attack and a lower value means a lower ease of attack (see Appendix C: Table 3 – Security Risk).  In order of ease of attack, the requirements are: the ability to view the access log of who has viewed a patient’s medical records by date; the additional role of emergency responder (ER), who will be able to see a report of the patient that details vital medical information; the update of the diagnosis codes for all diagnoses beginning January 1, 2010; and the ability to query for a medical professional according to diagnosis and zip code.  The tables used by each requirement are listed in Appendix B: Table 2 – Database Tables Used by Requirement.
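
The calculation itself is straightforward; the sketch below reproduces it using the averaged ease points and maximum asset value reported in Appendix C, ranking the requirements from highest to lowest security risk.

```python
# Sketch of the ease-of-attack calculation described above, using the averaged
# ease points and maximum asset value reported in Appendix C.
MAX_ASSET_VALUE = 100
EASE_POINTS = {
    "1: Add role: emergency responder": 39.3,
    "2: Find qualified licensed health care professional": 18.6,
    "3: Update diagnosis code table": 37.15,
    "4: View access log": 63.5,
}

def rank_security_risk(ease_points, max_value=MAX_ASSET_VALUE):
    """Security risk = average ease points x max asset value; rank high to low."""
    risks = {req: points * max_value for req, points in ease_points.items()}
    return sorted(risks.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for rank, (req, risk) in enumerate(rank_security_risk(EASE_POINTS), start=1):
        print(f"Rank {rank}: {req} (security risk {risk:g})")
```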

The most vulnerable requirement is providing the patient the ability to view their access log.  The access log provides vectors of attack that allow a potential malicious user to take advantage of the inference problem (Newman, 2009) to build a picture of internet protocol (IP) addresses within the network, users’ medical identification numbers (MIDs), and actions.  A good picture of the network configuration will allow the attacker to focus in on subnets and eventually build an attack using database footprinting (McDonald, 2002).  Personnel and other patients’ user IDs could also be exposed; an attacker can infer which user IDs belong to certain personnel and eventually determine their level of access (i.e., Doctor, Administrator, etc.).



Figure 1: iTrust Relational Design

Allowing emergency responders the ability to pull a report showing the patient’s vital statistics has several vulnerabilities.  The vital record statistics can be accessed from a police cruiser, an ambulance and possibly through a smart-phone type device.  As this information is considered personal health information, there is a possibility that records can be left in the open or accessed by any emergency responder regardless of whether an emergency exists.  A malicious user can glean a great deal of knowledge for further attacks by combining inferred records with an emergency responder access level.  Personal records provide information about other hospital or provider accounts that may be exploited to gain either more information or elevated access to other systems within iTrust.

The need to update diagnosis codes throughout the system implies that the access level providing this ability can reach nearly all tables within the iTrust system.  The threat is low: all diagnoses must simply be coded with ICD-10 rather than ICD-9-CM and saved to the patient and healthcare provider record.  A malicious attacker could nevertheless use this attack vector to establish the database’s footprint as a platform for further attacks.

The hardest attack vector will be the ‘find qualified healthcare professional’ requirement.  A diagnosis and zip code are all that are needed to query the database to pull up potential healthcare providers in the patient’s locality.  If the malicious attacker has already exploited one of the easier vulnerabilities presented by the new requirement(s), data provided in a report could help determine how a database is structured.

The new requirements assume that the system as a whole is secure and has not already been breached.  The relational database’s design provides for efficiency in data processing and access.  The design also presents security challenges if they are not mitigated.  A malicious attacker can infer a great deal about the patients, personnel, and users within the database because of its relational design.  Security mitigations need to provide a level of confidentiality that ensures personally identifiable information is not exposed, in keeping with privacy regulations.

Threats, Vulnerabilities, and Liabilities at Urgent Care Clinic

With the advancement of technology and the growing trends of enterprise networks, medical clinics like Urgent Care are becoming innovative and adopting new forms of database storage and network systems.  This means the implementation of a cloud database system or a form of cloud storage.  Cloud database systems are all the rage, and in a broad sense, they refer to virtual servers housed on the Internet used for storage of data.  The cloud database system focuses on increasing the capabilities and the capacity of network storage without having to invest in a new infrastructure.  It is a technology that utilizes remote servers to maintain data and applications that can be accessed by consumers and businesses at any time and from anywhere using the Internet (Gruman & Knorr, 2012).  It refers to many different computing models like Platform as a Service (PaaS)[1], Software as a Service (SaaS)[2], and Infrastructure as a Service (IaaS)[3].  However, implementing a medical cloud database system or utilizing Urgent Care Clinic’s iTrust cloud database services can have both positive and negative effects on data security and consumer availability.

Technological advancements and the broad accessibility involved with cloud storage, especially for electronic medical records, open additional avenues for vulnerabilities and threats against data, network systems, and company reputations (Trend Micro, 2011).  Cloud storage is here to stay, and some very important threats need to be addressed in iTrust’s cloud database systems.  With large amounts of data being transferred to cloud storage servers, physical attacks are giving way to network-based attacks and the abusive use of the cloud through dishonest activities.  Because cloud computing and cloud storage deal with privacy alongside a seamless, easy registration process, criminals using new and advanced technologies are targeting weak registration systems and slipping past limited fraud detection software.  This ranks as an extremely high security risk for both businesses and consumers utilizing the iTrust system.  Botnets have the ability to infiltrate a public cloud network and spread malware and viruses to thousands of computers.  This has already been seen in the real world with the “Zeus Botnet” attacking the Amazon cloud.  The Zeus botnet, having infiltrated Amazon’s EC2 cloud computing service, installed malware and took command and control of a high performance cloud platform (Cimpanu, 2009).  The malware caused a system-wide outage while remaining hidden and transferring millions of dollars, and it had to be dealt with.  In this instance and other similar instances, publicly blacklisting IaaS network addresses has been one way to combat and defend against spam and phishing.  To further defend against such risks, enhanced monitoring methods dealing with registration and initial validation should be initiated.  Whether implementing cloud analytics software or simply adding personnel for monitoring purposes, defending against such malware and botnets is a move in the right direction.

According to the Cloud Security Alliance (CSA), another threat to cloud computing and virtual data storage is ensuring that consumers understand the security implications associated with usage of, and integration into, the service models.  Relying on a weak set of application programming interfaces (APIs) exposes organizations to a variety of security related issues including availability, confidentiality, and accountability (Cloud Security Alliance, 2010, p. 9).  Rectifying this situation involves better authentication and encryption procedures on access controls.  Also, examining the security models of iTrust’s data storage interfaces will help to reduce the susceptibility of a company’s cloud network to attack.  What does this mean for consumers?  For the users and consumers of Urgent Care Clinic’s cloud network, this only helps, instilling a sense of confidence when they log into the virtual storage network that their access information and registration will be kept confidential and secure.

Risks on Urgent Care’s end are also apparent; one of the most important mitigations is a separation of duties, so that a single insider with too much power and access cannot mount a malicious attack.  When ranking this threat among others, it needs to be at the top of the list.  The impact of a malicious insider can be devastating to users and have an even larger impact on the organization.  Although this has not yet been seen in cloud computing and virtual database systems, insider attacks do happen.  Financial loss, productivity loss, and damage to the company’s integrity and reputation are just a few areas affected by malevolent insiders.  As the human element takes over and companies move toward virtual storage systems, it becomes critical for consumers to understand what policies and procedures are in place to thwart such attacks.  Urgent Care needs to enforce stricter supply chain management by incorporating a separation of duties as a way to bring checks and balances into the iTrust application.  Also, security breach notification policies need to be applied, and employees need to be aware of their surroundings and report any atypical information or suspicious behavior.  Training and preventative measures go a long way in preserving company data and brand reputation.

iTrust cloud database users must also be aware of the issues revolving around shared technologies.  Whether these are virtual machines, management technologies, or communication systems, such shared technologies were never designed for strong compartmentalization.  As a result, hackers and malicious individuals focus on how to influence the processes of other cloud database customers and how to gain unlawful access to sensitive medical data.  For example, a zero-day attack, or one that exploits computer application vulnerabilities such as the Blue Pill technique, has the potential to spread rapidly across a public cloud and expose all the data within the server.  The “Blue Pill” technique is a virtualization-based rootkit that, once implanted beneath a particular operating system, could embed malware into the system and go undetected until it is too late.  Because attacks like this are often delivered through phishing-style techniques, implementation of security best practices along with high-tech monitoring software will help prevent them.  In addition to monitoring software, enforcing a service level agreement for vulnerability remediation and continual scanning and auditing will help keep clinical data and the iTrust virtual storage system secure.

According to cloud computing and cloud database system standards, another risk that ranks alongside the other threats and vulnerabilities is the protection of sensitive information and personal data.  Companies need to secure information and data through identity management.  Identity management has been described as the most essential form of information protection that an organization can use (Aitoro, 2008, Para. 3) and can be defined as the process of representing, using, maintaining, and authenticating entities as digital identities in computer networks (Seigneur & Maliki, 2009, p. 270).

Along with the need for identity management is the requirement for accurate auditing and reporting due to federally mandated regulatory and compliance directives such as the Sarbanes-Oxley Act (SOX) and the Health Insurance Portability and Accountability Act (HIPAA).  The Sarbanes-Oxley Act, enacted in 2002, is legislation designed to protect shareholders and the general public from accounting errors and fraudulent enterprise practices.  The SOX act is governed by the Securities and Exchange Commission (SEC), which sets deadlines for compliance and publishes rules on requirements.  Additionally, Sarbanes-Oxley outlines which records are to be stored and for how long (Spurzem, 2006, Para. 1).  The Health Insurance Portability and Accountability Act (HIPAA) provides federal protections for personal health information held by covered entities and gives patients an array of rights with respect to that information.  Furthermore, the HIPAA Privacy Rule is balanced so that it permits the disclosure of personal health information needed for patient care and other important purposes.  The HIPAA Security Rule specifies a series of administrative, physical, and technical safeguards for covered entities to use to assure the confidentiality, integrity, and availability of electronic protected health information (U.S. Department of Health and Human Services, n.d., Para. 1).

As a direct result of the organizational needs for both identity management and accurate federal reporting, identity management systems were developed that provide the ability to log, control, audit and report on end user access to particular information assets and serve as the foundation of an organization’s threat and overall compliance strategy (DeFrangesco, 2009, Para. 1-2).  Identity management systems are designed to create processes which address the five fundamental aspects of identity management: authentication, authorization, accountability, identification, and auditability (UMUC 2, 2010, p. 5).

Additionally, there are many ways in which data can be compromised, and not having a backup of sensitive material remains the biggest fault among users and organizations.  Because data loss can have an overwhelming negative impact on a business, it is in the best interest of the company to provide proper policies and hardware for data duplication.  Not having data backup in place leaves a company’s information unrecoverable.  The threat that data will be compromised in the cloud increases due to the number of, and interactions between, risks and challenges which are either unique to virtual database systems or more dangerous because of the operational characteristics of the virtual cloud environment (Cloud Security Alliance, 2010, p. 12).  Besides the company’s reputation and integrity being compromised, there is a significant negative influence on customer morale and trust.  A company is only as good as the quality of work it produces, and when data is leaked or lost, users are not happy.  Cloud information systems and virtual data storage revolve around the ability to access sensitive data and personal information at any time; if this service were to go down or be compromised, the company’s reputation would be damaged and a significant financial impact would be placed on the organization.  Even worse, depending on the incident, a company might incur legal ramifications for possible compliance violations.  For an organization to avoid a severe occurrence of data loss, multiple backups should be in place and data stored on the network should be encrypted so that it remains secure at rest and in transit.  Not only will this provide a sense of data integrity for Urgent Care, it will offer peace of mind to the consumers utilizing the iTrust virtual data storage system.
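
As a sketch of the encryption point, the example below uses the Fernet recipe from the third-party cryptography package to encrypt a record before it is backed up or sent to cloud storage. The record content is illustrative, and key management (generation, rotation, escrow) is deliberately out of scope.

```python
from cryptography.fernet import Fernet  # third-party "cryptography" package

# Sketch: encrypt a record before it leaves the clinic for cloud storage, so a
# leaked backup or intercepted transfer exposes only ciphertext. Key handling
# is deliberately out of scope here.
def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt_record(key: bytes, token: bytes) -> bytes:
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()                        # keep in a key vault, not beside the data
    record = b'{"mid": 42, "diagnosis": "E11.9"}'      # illustrative patient record
    token = encrypt_record(key, record)
    print(token[:32], b"...")                          # ciphertext safe to back up or transmit
    assert decrypt_record(key, token) == record
```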

Another issue that remains a legitimate threat to iTrust users is account, service and traffic hijacking.  Ranging from phishing to spam, stolen user credentials or mobile devices allow hackers to infiltrate entire company networks.  With sensitive data being hosted on virtual servers, hackers have an all-access pass to everything just by gaining simple entrance, user login information, or an unmonitored mobile device.  Because untrained and gullible employees remain the easiest point of entry, attacks on passwords, devices, and user credentials remain at the top of the charts.  If an attack were to happen on the Urgent Care network, the hacker would have the ability to monitor transactions, manipulate data, and steal personal customer information at the click of a button.  Preventative measures must be taken by applying password policies, tracking software, and Internet usage policies to all employees.  Employees must keep personal information and credentials to themselves, and appropriate monitoring software must be introduced to oversee all activity within the organization.  “Organizations like Urgent Care should be aware of these techniques as well as common defense in depth protection strategies to contain the damage resulting from a data breach” (Cloud Security Alliance, 2010, p. 13).

Finally, when adopting Urgent Care’s virtual storage network service, it is important to provide users with PCI compliant software services.  Standards, compliance with internal security procedures, and the information that might be disclosed after an incident occurs tend to be overlooked and create an unknown risk profile when moving ahead with cloud computing.  Because companies want to move forward with virtual network storage due to the low costs and other benefits that come with implementation, these overlooked questions, like how data is being stored or who has access, may lead to serious malicious threats.  Unknown risk profiles can be better understood by analyzing the Heartland data breach.  In May of 2008, Heartland Payment Systems, the fifth biggest payments processor in the United States, was hacked using known-vulnerable software.  This known-vulnerable software came packed with loopholes that allowed hackers to embed a data sniffer, capturing credit card information, card numbers, expiration dates, and internal bank codes, allowing them to duplicate cards and steal customer and business finances (Slattery, 2009).  Once Heartland knew about the issue, it took only minimal steps to rectify the situation.  Heartland did not take the extra effort to comply and notify every single user that was affected; rather, it was only willing to do the bare minimum to comply with state laws.  If an organization is to learn from past mistakes and take anything from the Heartland data breach, it is to go above and beyond the bare minimum: not only to be in compliance with state and federal laws, but to contact every affected user while incorporating proper incident response procedures.  Not abiding by these rules or taking the extra step to have a proper incident response plan could cause Urgent Care’s reputation to take a dive and would, in turn, deter existing and future customers from utilizing the iTrust database service.  Chris Whitener, Chief Security Strategist for Hewlett-Packard, said, “companies should not jump into the cloud or virtual network storage without a proper risk assessment” (Mimoso, 2010).  Organizations need to be aware of the risks and evaluate the vulnerabilities as needed.

In summary, cloud and virtual database storage is, and will continue to be, part of the critical infrastructure of many businesses such as Urgent Care, so security and response policies and procedures must be considered when migrating to the iTrust virtual storage system.  “This role is likely to grow as a multitude of new services are developed and commercialized and users’ level of familiarity and comfort, with this approach to service delivery, develops and grows” (Kate, 2011).  Companies are in it for the cost savings and benefits that can be gained from cloud computing and virtual storage systems, but they should be focused on the consumer and end-user aspect of the business.  This is what will drive a company to the next level.  Ultimately, the end-user is the one experimenting and taking the risk by providing a facility such as Urgent Care with sensitive personal data.  Organizations that take the next step to secure, monitor, and regulate the information housed on the virtual database network are the ones more likely to give peace of mind to the end-user.  “From this study of current cloud computing and virtual storage practices and inherent risks involved, it is clear that at present there is a lack of risk analysis approaches in the cloud computing environments. A proper risk analysis approach will be of great help to both Urgent Care and their patients. With such an approach, patients and staff can be guaranteed data security and Urgent Care Clinic can win the trust of their customers” (Angepat & Chandran, 2012).  Cloud computing serves as an ever-growing technology for storage and data processing, and the threats and vulnerabilities involved cannot be left to fall by the wayside.  It is and will always be in the best interest of a company like Urgent Care to test these threats and make changes above and beyond expectations.  If Urgent Care’s integrity is compromised, then what else is there? Nothing.  It is in the best interest of any organization to make an effort to fortify its cloud network and take into consideration the threats focused on in this paper, to gain ample knowledge to defend against attacks and strengthen security.

Changes to Security Management Policies

With the inclusion of the new requirements, changes will have to be made to the security policy in order to reduce risk. These changes can come over time, but they should take as little time as possible. The first step is to improve authentication protocols, such as using stricter password requirements or PKI-based authentication (Katsumata, Hemenway, & Gavins, 2010). Admittedly, a more stringent password requirement may be more of a hassle for patients. On the other hand, employees should be expected to have strong passwords or utilize the PKI system. Cost may factor into this change, and indeed the integration of a PKI system can cost up to $1,000,000 (Katsumata et al., 2010). However, the risk reduction is far greater for PKI than for passwords, and the cost-to-benefit ratio is much lower in comparison (Katsumata et al., 2010). Security is an investment, not only for the company but also for its users.

The inclusion of regular audits can help improve and refine access controls, making sure that employees have the correct authentication and that patient access is both secure and unencumbered (Sommer & Brown, 2011). Penetration testing can find weaknesses in the system as the new roles are established and the entire system is changed to incorporate the new security measures (Sommer & Brown, 2011). Lastly, plans for disaster recovery and mitigation should be prepared. Even with all the latest technologies and best policies, there is always the chance that someone will have the luck or skill to break through all the barriers. As such, having contingency plans can help reduce the impact of a security breach (Sommer & Brown, 2011). Data redundancies and system technicians trained and prepared for such a crisis can help mitigate damage considerably (Sommer & Brown, 2011).

Each requirement needs to be fine-tuned over time for any potential leaks and security hazards. For example, accessing system logs would require strong authentication and regular audits. The audit itself can be a universal security check-up on each requirement, as audits seek out both weaknesses and discrepancies in the system (Sommer & Brown, 2011). Be it patients’ access rights, emergency responder access, or system administrator access, each profile must be scrutinized for properly configured permissions and access controls. Of the four requirements, updating the diagnosis code table has the lowest priority. Authentication is still necessary, but the team decided that it was unlikely to be a target for attack. Coming to this conclusion was most definitely a team effort.

Reaching consensus over the prioritization of security issues was a surprisingly uncomplicated task. Each member of the team reviewed the iTrust addendums and filled out the tables per individual opinion. Individual tables were collected and the values averaged. This way, every team member’s opinion is weighed equally and fairly. Fortunately, while there were minor differences in security values and ease-of-attack points, all team members produced very similar tables regarding prioritization. Every team member agreed that certain tables, such as cptcodes, hospitals, icdcodes, and ovprocedure, were not of high value for attacks. Similarly, the team agreed that the patients, personalhealthinformation, personnel, and users tables were of the highest value, and thus the most likely to be attacked. Despite the new requirements needing different access levels to different tables, the team determined that all new roles are equally viable targets and highly vulnerable to attack. This was because of the number of tables each role needed access to; each requirement would access a high-risk table at some point. As a result, the team agreed that all new requirements were at high risk for attack. Lastly, using the ease-of-attack value combined with the asset value, the team was able to prioritize the security issues.


There are always lessons to be learned when reevaluating an existing security policy. It is foolhardy to blindly set up new requirements and roles without properly assessing the risk factors these new roles may introduce. Rather, it is important to examine both the new requirements and the currently established roles and determine the level of risk they represent. By looking at this objectively, we can produce a priority list.  In establishing a priority list, we are better able to allocate appropriate resources to protect particularly vital data tables without compromising the overall security of the network. Since these modifications will reflect on security as a whole, we must be careful in making these changes. Ensuring compliance with federal standards is a fantastic first step in the right direction, but we must also look to exceed these minimum requirements. This leads to establishing trust between provider and client, and trust is what builds successful relationships. An important lesson learned is to make certain that we both deserve and can hold onto the trust of clients, and an excellent way to do so is to keep their data secure.


Appendix A: Table 1 – Database Table Value Points

Table | Value (SS) | Value (TR) | Value (KZ) | Value (BR) | Value | Use in Requirement #
allergies | 20 | 15 | 1 | 20 | 20 | 1, 3
cptcodes | 1 | 10 | 3 | 3 | 3 | 2, 3
hospitals | 5 | 5 | 5 | 5 | 5 | 1, 2, 3
icdcodes | 5 | 20 | 5 | 5 | 5 | 1, 2, 3
labprocedure | 13 | 70 | 40 | 40 | 40 | 2, 3, 4
loginfailures | 40 | 40 | 20 | 20 | 40 | 3, 4
ndcodes | 1 | 50 | 3 | 3 | 3 | 1, 2, 3
officevisits | 8 | 4 | 20 | 8 | 8 | 2, 3, 4
ovdiagnosis | 20 | 60 | 40 | 20 | 20 | 1, 2, 3
ovmedication | 40 | 3 | 13 | 20 | 40 | 1, 3
ovprocedure | 1 | 30 | 2 | 1 | 1 | 2, 3, 4
ovsurvey | 1 | 2 | 1 | 1 | 1 | 2, 3
patients | 100 | 80 | 100 | 100 | 100 | 1, 3, 4
personalhealthinformation | 100 | 40 | 40 | 100 | 100 | 1, 3, 4
personnel | 100 | 90 | 100 | 100 | 100 | 2, 3, 4
transactionlog | 13 | 1 | 20 | 20 | 20 | 4
users | 20 | 100 | 100 | 40 | 100 | 3, 4
longtermdiagnosis | n.d. | 40 | 1 | n.d. | 40 | 1, 3
shorttermdiagnosis | n.d. | 60 | 3 | n.d. | 60 | 1, 3




Appendix B: Table 2 – Database Tables Used by Requirement

Requirement | Table(s) Used (Consensus) | Average Value Points of Each Table | Max Value | Average
1:  Add role:  emergency responder. | allergies, …, personalhealthinfo, shorttermdiagnosis, …
2:  Find qualified licensed health care professional. | cptcodes, …
3:  Update diagnosis code table. | allergies, …
4:  View access log. | labprocedure, …, personal health info, …

Appendix C: Table 3 – Security Risk

Requirement | Ease of Attack Points (Average) | Average Max Value of Asset Points | Security Risk | Rank of Security Risk
1:  Add role:  emergency responder. | 39.3 | 100 | 3930 | 2 (based on higher ranking average)
2:  Find qualified licensed health care professional. | 18.6 | 100 | 1860 | 4
3:  Update diagnosis code table. | 37.15 | 100 | 3715 | 3
4:  View access log. | 63.5 | 100 | 6350 | 1



Aitoro, J. (2008). Identity Management. Retrieved from: http://www.nextgov.com

Angepat, M., & Chandran, S. P. (2012, October 27). Cloud Computing: Analysing the risks involved in cloud computing environments. Retrieved July 29, 2012, from Cloud Computing: School of Innovation, Design and Engineering: www.idt.mdh.se/kurser/ct3340/ht10/…/16-Sneha_Mridula.pdf


Bounos, M. (n.d.). Evaluating computer assisted coding systems & ICD-10 readiness. Wolters Kluwer Law & Business.  Retrieved from http://www.mediregs.com/files/1007-1/WKLBEvaluatingCADICD10.pdf


Centers for Disease Control and Prevention. (2012). International classification of diseases, tenth revision clinical modification.  Classification of Disease, Functioning, and Disability.  Retrieved from http://www.cdc.gov/nchs/icd/icd10cm.htm


Cimpanu, C. (2009, December 10). Zeus Botnet Infiltrates Amazon’s Cloud. Retrieved July 29, 2012, from Softpedia: http://news.softpedia.com/news/Zeus-Botnet-Infiltrates-Amazon-s-Cloud-129438.shtml


Cloud Security Alliance. (2010, February 24). Top Threats to Cloud Computing V1.0. Retrieved July 29, 2012, from Cloud Security Alliance: http://www.cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf


DeFrangesco, R. (2009). Identity and Access Management as an Audit Tool. Retrieved from: http://www.itbusinessedge.com


Department of Health & Human Services. (2006). Emergency responder electronic health record. Office of the National Coordinator for Health Information Technology.  Retrieved from healthit.hhs.gov/…/EmergencyRespEHRUseCase.pdf


Gruman, G., & Knorr, E. (2012, February 29). What cloud computing really means. InfoWorld. Retrieved July 29, 2012, from http://www.infoworld.com/d/cloud-computing/what-cloud-computing-really-means-031


Katsumata, P., Hemenway, J., & Gavins, W. (2010). Cybersecurity risk management. The 2010 Military Communications Conference – Unclassified Program. Retrieved from


Kate. (2011, June 7). Securing Your Data In the Cloud: An Insiders Perspective. Retrieved July 29, 2012, from Kate’s Comments: http://www.katescomment.com/securing-data-in-the-cloud/


Mimoso, M. S. (2010, March 1). Cloud Security Alliance releases top cloud computing security threats. Retrieved July 29, 2012, from Tech Target: Search Cloud Security: http://searchcloudsecurity.techtarget.com/news/1395924/Cloud-Security-Alliance-releases-top-cloud-computing-security-threats


McDonald, S. (2002, April 8). SQL Injection: Modes of attack, defense, and why it matters. Retrieved July 28, 2012, from Sans.org: http://www.sans.org/reading_room/whitepapers/securecode/sql-injection-modes-attack-defence-matters_23


Newman, R. (2009). Computer security: Protecting digital resources. Sudbury, MA: Jones and Bartlett Publishers International.


Polito, J. M. (2012). Ethical Considerations in Internet Use of Electronic Protected Health Information. Neurodiagnostic Journal, 52(1), 34-41.


Seigneur, J-M. & Maliki, T. (2009). Identity Management. In Vacca, J.R. (Ed.), Computer and information security handbook. Boston, MA: Morgan Kaufmann Publishers.


Slattery, B. (2009, January 21). Heartland Has No Heart for Violated Customers. Retrieved July 29, 2012, from PC World: http://www.pcworld.com/article/158038/heartland_has_no_heart_for_violated_customers.html

Sommer, P., & Brown, I. (2011). Reducing systemic cybersecurity risk. Organisation for Economic Cooperation and Development. Retrieved from http://papers.ssrn.com


Spurzem, B. (2006). Sarbanes-Oxley Act (SOX). Retrieved from: http://searchcio.techtarget.com


Thacker, S. (2003). HIPAA privacy rule and public health. Centers for Disease Control and Prevention.  Retrieved from



Trend Micro. (2011, August 23). Security Threats to Evolving Data Centers. Retrieved July 29, 2012, from Virtualization and Cloud Computing: www.trendmicro.com/cloud…/rpt_security-threats-to-datacenters.pdf


U.S. Department of Health and Human Services. (n.d.). Health Information Privacy. Retrieved from: http://www.hhs.gov


University of Maryland University College. (2011). CSEC 610: Cyberspace and Cybersecurity, Interactive Case Study II. College Park, MD, USA.

UMUC. (2012). Module 9: Virtualization and Cloud Computing Security. Adelphi, MD, USA. Retrieved July 23, 2012, from http://tychousa5.umuc.edu/cgi-bin/id/FlashSubmit/fs_link.plclass=1206:csec630:9042&fs_project_id=389&xload&ctype=wbc&tmpl=csecfixed&moduleselected=csec630_09

[1] “Platform as a Service (PaaS) is a way to rent hardware, operating systems, storage and network capacity over the Internet” (TechTarget, PaaS, 2012).

[2] “Software as a Service (SaaS) is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network” (TechTarget, SaaS, 2012).

[3] “Infrastructure as a Service is a provision model in which an organization outsources the equipment used to support operations, including storage, hardware, servers and networking components” (TechTarget, IaaS, 2012).


Protection of Network Operating Systems






Protection of Network Operating Systems

Amy Wees


15 July 2012




Operating systems are essential to business operations, system security and software applications. Users count on operating systems to provide easy to use graphical user interfaces (GUI), operate multiple applications at one time, and store and access data and information needed for everyday operations (UMUC, 2011).  Businesses count on operating systems to address and provide for the four basic security concerns of confidentiality, integrity, availability and authenticity (Stallings, 2011).  Although many operating systems have incorporated controls to address these security concerns, there are additional measures that need to be taken to ensure the necessary level of security is achieved. Identification and Authentication protection measures are the most significant measures to implement.  Before a user or administrator is allowed to access the system, security measures must be implemented to identify and authenticate the need for and level of access.  After personnel are identified and authenticated, access control policies must be implemented to ensure limited access to applications, information, computers and servers on the network.  Internal to external and external to internal communications must also be protected and restricted.  Drafting and enforcing effective security policies and conducting annual audits allow for vulnerability assessment and correction of weaknesses in configuration, training, or procedures.  The protection measures noted in this paper are rated in severity based on a case study on auditing UNIX systems by author Lenny Zeltser (2005).




Keywords: firewalls, security training, operating systems, security policies, password, access control, security management

Protection of Network Operating Systems

Operating systems are essential to business operations, system security and software applications.  Operating systems allow administrators to control access to the system, install and configure third party commercial-off-the-shelf (COTS) software and monitor activity with built in auditing tools.  Users count on operating systems to provide easy to use graphical user interfaces (GUI), operate multiple applications at one time, and store and access data and information needed for daily operations (UMUC, 2011).  Businesses count on operating systems to address and provide for the four basic security concerns of confidentiality, integrity, availability and authenticity (Stallings, 2011).  Although many operating systems include built in controls to address these security concerns, additional measures should be taken to ensure the required level of security is achieved.  This paper will address the implementation, advantages and disadvantages, and security management issues of three protection measures: Identification and Authentication, Access Control, and Security Policies and Auditing (Information Assurance Directorate, 2010).


Security Ratings and Prioritization

The protection measures noted in this paper are rated in severity based on a case study on auditing UNIX systems by author Lenny Zeltser (2005).  A high severity rating is one that could result in an attacker or intruder gaining root-level access to a system, leading to potential loss of critical data.  A medium severity rating is given to vulnerabilities that could result in remote nonprivileged access to the system.  A low severity rating relates to improbable events that may result in a local attacker gaining nonprivileged access to the system (Zeltser, 2005).  The measures listed in this paper are rated as follows:

Measure | Rating
Identification and Authentication protection measures | High
  1. Badge Access Control System
Access Control | High
  1. Host Based Firewall
  2. Network Firewall
  3. Use of a DMZ
  4. Limiting Access to Data using Least Privilege & Separation of Duty Principles
  5. Enforcing strong password policies
Security Policy | Medium
  1. Drafting Effective Security Policies
  2. Security Awareness Program
  3. Security Auditing



Identification and Authentication

Identification and Authentication protection measures are the most significant measures to implement.  Before a user or administrator is allowed access to the system, security measures must be implemented to identify and authenticate the need for and level of access.  Pre-employment background checks can prevent organizations from hiring individuals with criminal records and verify qualifying information on a candidate’s resume (Mallery, 2009).  A popular method for controlling identification and authentication is the use of access badges.  Access badges can be linked to security systems to control and monitor physical access to the facility and to rooms within the facility and, most importantly, logical access to the systems that contain proprietary sensitive information.  Access badges also provide employees a visual tool for monitoring levels of access, job titles, and recognition of visitors.

Today many different types of access badge systems are available.  An organization must weigh the cost of the system against the benefit to security.  Smart card systems are relatively easy to implement, offering a multitude of vendors and interoperability with legacy systems.  After the user has verified his or her identity using a passport or driver’s license and a representative of the company has verified the user’s required level of access, the user can be issued a smart card and sets a PIN to be used from that point forward to verify his or her identity and authentication for physical and logical access into the facility (Smart Card Alliance, 2003).

Management of the system will require information assurance professionals who can conduct background checks and verify identities as well as control and administer the computer applications associated with the system.  The organization will also need to prepare for possible outages of the system and develop procedures for training employees to identify badges, escort unauthorized individuals and properly wear, use and store badges.

Utilizing smart card technologies removes the need to verify identity on a daily basis and also allows for ease of monitoring of a person’s whereabouts.  Access changes can also be made remotely from the management software application if an employee switches jobs, loses a badge, or leaves the company.  Smart cards can be used for physical and logical access, and such access can be limited throughout the facility.  Smart cards can also limit the number of passwords an employee has to remember, decreasing man-hours spent on password resets and locked-out systems.  Although the advantages are many, access badge systems can be costly, and a strong social engineer may be able to outsmart the system by replicating a badge or fooling an employee into granting access they should not have.


Access Control

            The second most critical security measure is access control.  After personnel are identified and authenticated, access control policies must be implemented to ensure limited access to applications, information, computers and servers on the network.  Internal to external and external to internal communications must also be protected and restricted.

A firewall is one of the best mechanisms to protect the network from internal and external threats and to control and monitor communications.  The Windows operating system offers an integrated firewall for use on clients, which drops unsolicited incoming traffic that is not in response to a request made by the computer and allows only specified exceptions.  Host-based firewalls such as the Windows Firewall safeguard against threatening applications that use unsolicited traffic as an attack mechanism (Microsoft, 2012).  The network firewall should be attached directly to the internet connection to block malicious traffic before it enters the network.  Network firewall software can be installed on a dedicated server located between the internet and the protected network (Goldman, 2006).  Firewalls can filter and monitor incoming traffic and also protect against insider threats such as users clicking on phishing e-mail links or navigating to dangerous websites.  Goldman (2006) notes that research shows seventy to eighty percent of malicious activity comes from insiders who already have network access.  Although firewalls are an advantageous method of protection, they can do more harm than good if they are not configured properly or are maintained by administrators who do not understand the complex rules or monitoring procedures.  Firewalls must also be combined with other protection strategies such as vulnerability assessment tools, intrusion detection and prevention systems, and antivirus tools (Goldman, 2006).
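To make the host-based behavior described above concrete, the short Python sketch below models a default-deny inbound policy that permits traffic only when it answers a request the host made or matches an explicit exception.  The rule set and connection-tracking table are simplified assumptions, not the Windows Firewall implementation.

# Minimal sketch of host-based firewall logic: unsolicited inbound traffic is dropped
# unless it matches an allowed exception or answers a request this host made.
# The rule set and state table are illustrative assumptions, not any vendor's design.
ALLOWED_INBOUND_PORTS = {443}        # explicit exceptions, e.g., a locally hosted service
outbound_requests = set()            # (remote_host, remote_port) pairs this host contacted

def record_outbound(remote_host, remote_port):
    outbound_requests.add((remote_host, remote_port))

def allow_inbound(remote_host, remote_port, local_port):
    if (remote_host, remote_port) in outbound_requests:
        return True                  # solicited: a reply to our own request
    if local_port in ALLOWED_INBOUND_PORTS:
        return True                  # unsolicited but explicitly permitted
    return False                     # default deny

record_outbound("93.184.216.34", 443)              # this host contacted a web server
print(allow_inbound("93.184.216.34", 443, 52344))  # True: solicited reply
print(allow_inbound("203.0.113.9", 4444, 3389))    # False: unsolicited probe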

The physical location and configuration of assets on the network is also vital to access control.  For example, a demilitarized zone (DMZ) is a controlled area for the most vulnerable systems on the network.  If a user is hacked or a system is infected, the DMZ helps prevent interruption of essential functions such as e-mail and databases (Turner, 2010).
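The segmentation idea can be illustrated with a small table of permitted zone-to-zone flows; the zone names and rules below are assumptions for illustration, and real designs vary by organization.

# Simplified illustration of DMZ segmentation: which zone-to-zone flows are permitted.
# Zone names and rules are assumptions; real designs vary by organization.
PERMITTED_FLOWS = {
    ("internet", "dmz"): True,        # outside users may reach public servers in the DMZ
    ("internal", "dmz"): True,        # internal users may reach DMZ services
    ("dmz", "internal"): False,       # a compromised DMZ host cannot reach internal systems directly
    ("internet", "internal"): False,  # no direct path from the internet to the protected LAN
}

def flow_allowed(src_zone, dst_zone):
    return PERMITTED_FLOWS.get((src_zone, dst_zone), False)

print(flow_allowed("internet", "dmz"))       # True
print(flow_allowed("internet", "internal"))  # False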

Password, user, and administrative access policies are equally essential to protecting the network and clients from outside and inside threats.  The level of access a user requires must first be determined using the principle of least privilege.  Files and information should be separated by roles or departments within an organization, and access given only to those assigned to those roles or associated with that department.  Limiting data access also decreases the possibility of an intruder gaining access to critical files.  Administrative access should likewise be limited to the roles and responsibilities of the administrator.  Full administrative access to the network should be given on an extremely limited basis following a separation-of-duty policy.  Password policies should be understood by all users and administrators, and Windows Active Directory should be configured to enforce the policy.  Studies have shown the most secure password policies are those that require a 14-character password comprised of at least two uppercase letters, two lowercase letters, two numbers, and two special characters.  Passwords should be changed every 60 days, and screen saver passwords should be enforced to prevent intruders from accessing unattended systems (Turner, 2010).  An excellent prevention and education measure to enforce the use of strong passwords is to run a password-cracking application such as L0phtCrack against the password database using a keyboard-progression dictionary of the kind often used by crackers.  If passwords are cracked, users should be notified and forced to change their passwords.  Training in this way helps users and administrators learn to create and maintain strong passwords and understand how easily weak passwords can be exploited for malicious purposes.
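The composition and aging rules described above can be expressed as a simple check.  The Python sketch below is a hedged illustration of such a policy, assuming the special-character requirement means non-alphanumeric characters; it is not a specific product’s implementation.

import re
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=60)   # rotation interval cited above

def meets_policy(password):
    """Check the composition rule described above: at least 14 characters with two
    uppercase letters, two lowercase letters, two numbers, and two special characters."""
    return (
        len(password) >= 14
        and len(re.findall(r"[A-Z]", password)) >= 2
        and len(re.findall(r"[a-z]", password)) >= 2
        and len(re.findall(r"[0-9]", password)) >= 2
        and len(re.findall(r"[^A-Za-z0-9]", password)) >= 2
    )

def password_expired(last_changed, today):
    return today - last_changed > MAX_PASSWORD_AGE

print(meets_policy("Summer2013!"))                            # False: too short, too few special characters
print(meets_policy("Tr!cky-P4ssw0rd#Xy"))                     # True
print(password_expired(date(2013, 3, 1), date(2013, 6, 9)))   # True: older than 60 days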


Security Policies and Auditing

The likelihood of a business falling victim to a cyberattack increases as more and more businesses use technology to conduct operations and store critical information.  Attacks can cause severe financial losses to businesses and customers and destroy reputations.  Research has shown that most security breaches are caused not by misconfigured firewalls or poor password policies but by inadequate security planning (Hamdi, Doudriga, & Obaidat, 2006).  Drafting and enforcing effective security policies and conducting annual audits allows for vulnerability assessment and correction of weaknesses in configuration, training, or procedures.  The security policy should be based on business objectives, detail security measures for information systems, operating systems, and key management in the business environment, and document procedures for handling security incidents.  Security policies can also be multifaceted, separated by audience (such as technical versus end-user policies) or by issue (such as information classification and access control policies).  At a minimum, the security policy should address access privileges, user accountability and responsibility, authentication procedures, availability and maintenance of resources, and procedures for reporting violations (Hamdi, Doudriga, & Obaidat, 2006).

Enforcing security policies requires awareness programs and employee training.  Employees should feel they are stakeholders in the security of the organization.  Policies should be widely disseminated, easy to understand and follow, and reinforced through regular retraining.  Employees should know how to recognize and respond to security incidents.  The effectiveness of a security policy can be assessed using simple tests such as a contingency plan or emergency response drill (Hamdi, Doudriga, & Obaidat, 2006).

Conducting regular vulnerability assessments and audits of an organization’s security posture will help ensure that weaknesses in operating systems, third-party applications, and security policies are identified.  This is best accomplished by hiring a third party to conduct the audit.  Security professionals are trained on many different systems and can educate staff on vulnerability management.  Audits can include penetration tests, which assess the external security of the network, or a less invasive vulnerability assessment that scans the system for threats and provides fix actions (Mallery, 2009).  If the organization decides not to outsource the audit, there are other options for scanning the network, such as the Nessus vulnerability assessment tool, as well as employing intrusion detection and prevention systems and antivirus software.  The benefits of in-house tools are that they are always available and can often automatically assess and mitigate vulnerabilities.  The drawbacks are that employee training to maintain such systems can be extensive, and the systems can be costly (Kakareka, 2009).  After audits are conducted, it is paramount to set a time frame in which to accept risks, remedy vulnerabilities, and update security policies and other relevant documents.
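Dedicated scanners such as Nessus automate this work and match discovered services against databases of known vulnerabilities.  As a purely illustrative sketch of the underlying idea, the Python snippet below probes a handful of well-known ports on a host the tester is authorized to assess; it is not the Nessus API and is no substitute for a real assessment tool.

import socket

# Illustrative sketch only: check whether a few well-known ports answer on a host
# you are authorized to assess. Real scanners such as Nessus go far beyond this.
COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}

def quick_port_check(host, timeout=1.0):
    results = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            results[service] = (s.connect_ex((host, port)) == 0)  # True if the port accepted a connection
    return results

if __name__ == "__main__":
    # Hypothetical target; scan only systems you own or are permitted to test.
    print(quick_port_check("127.0.0.1"))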



Conclusion

            Businesses rely on network operating systems as an effective way to control, manage, and secure their operations.  Effective security of operating systems requires a defense-in-depth strategy that goes beyond what is inherent to the operating system.  Businesses must identify and authenticate employees using background checks and physical security procedures such as badge systems.

After identification and authentication, access to assets is best controlled using the principles of least privilege and separation of duties.  User and administrator access to shared electronic data folders and applications should be separated and limited by function or role.  Firewalls, DMZs and physical separation of assets can be utilized to protect the network from unwanted incoming and outgoing traffic and malicious actors.  Strong password policies and practices can also assist in protecting the network and preventing unauthorized access.

Finally, drafting a strong security policy based on risk analysis and business objectives and confirming employees have a clear understanding of policies and procedures will go a long way in developing a security culture in the organization.  Conducting periodic audits will ensure policies are updated and put into practice.


References

Goldman, J. (2006). Firewall Basics. In H. Bidgoli, Handbook of Information Security (pp. 2-14). Hoboken: John Wiley & Sons, Inc.

Hamdi, M., Doudriga, N., & Obaidat, M. (2006). Security Policy Guidelines. In H. Bidgoli, Handbook of Information Security (pp. 227-241). Hoboken: John Wiley & Sons, Inc.

Information Assurance Directorate. (2010). US Government Protection Profile for General-Purpose Operating Systems in a Networked Environment. Information Assurance Directorate. Retrieved from http://www.niap-ccevs.org/pp/pp_gpospp_v1.0.pdf

Kakareka, A. (2009). What is Vulnerability Assessment? In J. Vacca, Computer and Information Security Handbook (pp. 383-393). Boston: Morgan Kaufmann Inc.

Mallery, J. (2009). Building a Secure Organization. In J. Vacca, Computer and Information Security Handbook (pp. 3-21). Boston: Morgan Kaufmann Inc.

Microsoft. (2012). Windows Firewall. Retrieved from Microsoft TechNet: http://technet.microsoft.com/en-us/network/bb545423.aspx

Smart Card Alliance. (2003). Using Smart Cards for Secure Physical Access. Princeton Junction: Smart Card Alliance. Retrieved from http://www.smartcardalliance.org/resources/lib/Physical_Access_Report.pdf

Stallings, W. (2011). Operating Systems Security. Handbook of Information Security, 154-163.

Sensei Enterprises, I. (Director). (2010). How do I secure my computer network? [Educational Video]. Retrieved from http://www.youtube.com/watch?v=g_xzh1rqkNs&feature=youtube_gdata_player

UMUC. (2011). Prevention and Protection Strategies in Cybersecurity. Adelphi, MD, USA.

Zeltser, L. (2005). Auditing UNIX Systems: A Case Study. Retrieved from Lenny Zeltser: http://zeltser.com/auditing-unix-systems/#prioritizing
