Sunday, October 30, 2011

Information Security is A Governance Issue

Proposals of any kind, security investments being no exception, demand the inclusion of certain fundamental components: Cost-benefit analyses need to be performed, metrics need to be put in place, and value propositions need to be clarified to ensure that expenditures are necessary and that returns are both positive and measurable. IT managers must justify all investment proposals appropriately, using the tools in this book to guide them through the process, but that is only part of the equation. After the necessary data has been compiled, the decision to invest in greater levels of security moves beyond sheer financial modeling. The wider implications inherent in return-on-security investing require executives and members of the board to consider the softer and less quantifiable issues that are pertinent to every organization. For ultimately, security investing represents an active business decision that expresses an organization's tolerance for risk, cyber or otherwise.

In a speech at the National Cyber Security Summit in December 2003, Secretary Tom Ridge of the Department of Homeland Security acknowledged that in the first six months of 2003, more than 76,000 incidents had already occurred, many the result of hackers just going about their work. Expanding on that thought, he said, "A few lines of code could ultimately wreak as much havoc as a handful of bombs." The issues are very real; the challenge facing investment proposals is keeping alarmism out of the equation when determining a value for events not happening. Governments globally can create measures to protect both hard and soft structures, but much of the cyber-world rests in the private domain.

Secretary Ridge amplified this thought in his address when he stated, "Eighty-five percent of our nation's critical infrastructure, including the cyber-network that controls it, is owned and operated by the private sector. We need businesses . . . to lead the way." He went on to say, the "success of protecting our cyberspace depends on the investment and commitment of each . . . business."[1] Security governance requires organizations to take a broader perspective, understanding that the more successful a security investment is, the less visible and less measurable its results will be.


In this part, we will present a treatment process for dealing with a breach. This process is intended for larger companies.
It comprises the major steps below.
1 Gather information.
2 Determine extent and damage.
3 Establish and conduct investigation.
4 Determine mitigation (in parallel with Step 3).
5 Implement mitigation.
6 Follow up on investigation results.
7 Determine degree of resolution achieved.
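The seven steps above can be sketched as a simple ordered pipeline. This is only an illustrative sketch: the `Breach` record and the step handling are invented here, not part of the process description itself.

```python
# A minimal sketch of the breach treatment process as an ordered checklist.
# The handlers are placeholders; a real process would gather data, notify
# stakeholders and record evidence at each step.

from dataclasses import dataclass, field

@dataclass
class Breach:
    description: str
    findings: dict = field(default_factory=dict)
    completed_steps: list = field(default_factory=list)

STEPS = [
    "gather information",
    "determine extent and damage",
    "establish and conduct investigation",
    "determine mitigation",   # runs in parallel with the investigation (Step 3)
    "implement mitigation",
    "follow up on investigation results",
    "determine degree of resolution achieved",
]

def run_treatment(breach: Breach) -> Breach:
    for step in STEPS:
        # Placeholder: mark the step as done and move on.
        breach.completed_steps.append(step)
    return breach

incident = run_treatment(Breach("suspected break-in"))
print(len(incident.completed_steps))  # 7
```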

Now let us look at these steps in detail.

Step 1 Gather information
This is the initial step. We assume that you have just been made aware of the fact that something might be wrong.
You will spend the next hours or days determining precisely what this 'something' is. For example, is it:
o a virus attack?
o a Distributed Denial of Service (DDoS) attack?
o a break-in?
o a lost or stolen piece of hardware (drives, laptops, PCs, servers, paper files)?
o anything else?

So, in the very first step you determine what actually happened. Next, you venture into somewhat legal territory to establish the following points.
o Which sites / jurisdictions are affected?
o Is it:
° fraud?
° theft?
° computer crime?
° computer or other espionage?
° anything else?

This is very important, as it will have an influence on later legal proceedings.
Furthermore, you need to establish the points below:
o Where did it happen?
o Where did it originate? This might be entirely different from the sites that are affected.
o Is it an external or internal threat or a case of collusion? If insiders come under suspicion, the investigation will look entirely different and will be more difficult.
o When did it happen? In computer crime it is not unlikely that you become aware of something that happened months previously, which would seriously complicate the investigation.
o Hints about why it happened.

This last point is, of course, the best starting point for actual remedial measures and to prevent the same type of breach from recurring.

Step 2 Determine extent and damage
Based on our risk categories, classify the breach in relation to:
o harm done to the company's reputation with the public and its customers
o purely financial damage
o legal damages, such as class-action lawsuits or exposure to criminal penalties (e.g. in cases of bribery).

Be careful to include damages propagating through business processes, and everything else that you reasonably can. Items should include, but not be limited to:
o direct damage to sites or hardware
o damage caused by a need for overtime
o damage through time lost
o damage from contractual penalties
o damage caused by a need for external consulting.
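When totalling the damage categories above, a simple tally by category keeps the figures traceable. The category names mirror the list; the amounts below are invented for illustration.

```python
# A hedged sketch: tallying breach damage by category. The monetary figures
# are illustrative assumptions, not values from the text.

damage_items = {
    "direct damage to sites or hardware": 120_000,
    "overtime": 15_000,
    "time lost": 40_000,
    "contractual penalties": 75_000,
    "external consulting": 30_000,
}

total = sum(damage_items.values())
print(f"Total quantifiable damage: {total}")  # Total quantifiable damage: 280000
```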

When establishing the magnitude of the breach, make sure to include the following departments of your company, so that they are able to prepare:
o PR, for preparing public and customer-oriented statements
o the Chief Financial Officer for excess budget clearances needed
o the Chief Executive Officer
o local managing directors as required
o senior staff of affected departments, such as IT and Marketing.

Step 3 Establish and conduct investigation
In the next step, you first need to determine whether the root cause of the breach is sufficiently known, whether you are exposed to some kind of human action, and whether the magnitude of the case justifies an internal or an external investigation. If you deem this to be true, you should assemble an investigational team, as described below. Based on the sensitivity of the case, it is advisable to use external consultants only, as they will be able to act unhindered by internal affairs and will vanish when the investigation concludes.

The investigational team should contain:
o a senior employee of the company, preferably the Chief Security Officer
o a leader, who should be an experienced senior professional
o as many technical investigators and operatives as needed; this will depend entirely on the extent of the case, whether it is national or international, and so on
o a quality assurance officer, who should also be an experienced senior investigation professional.
The leader of the investigation should report to the Chief Security Officer, and the team will typically meet:
o once per day in comparatively easy cases, or in cases where you must in any case wait for external information gathering
o twice per day if a single day is supposed to bring about significant progress or developments in the case.

Make sure that you have regular progress reports at least weekly.
Looking at the details of an investigation, the subjects can be so different, as the case studies show, that we can say only one thing: each investigation is totally unique. There are groups, based on types of incidents, but there is certainly no such thing as a 'standard' investigation.
You should try to gather all the facts in a central paper and/or electronic file. A well-organised file will contain:
o information on all relevant people (i.e. the targets of the investigation), including full and previous names, addresses, dates of birth and all collected background information
o information on all relevant places, such as homes, offices and meeting places
o a list of items of evidence collected and a description of why these are relevant as evidence
o a description and progress notes of the case
o a draft criminal complaint or a description of the case suitable for filing a lawsuit against the perpetrator.
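The contents of a well-organised case file listed above lend themselves to a simple record structure. The class and field names here are one possible rendering, chosen for this sketch rather than taken from the text.

```python
# A minimal sketch of a central case file, mirroring the list above.
from dataclasses import dataclass, field

@dataclass
class Person:
    full_name: str
    previous_names: list = field(default_factory=list)
    address: str = ""
    date_of_birth: str = ""
    background: str = ""

@dataclass
class EvidenceItem:
    description: str
    relevance: str  # why this item matters as evidence

@dataclass
class CaseFile:
    people: list = field(default_factory=list)    # targets of the investigation
    places: list = field(default_factory=list)    # homes, offices, meeting places
    evidence: list = field(default_factory=list)
    progress_notes: list = field(default_factory=list)
    draft_complaint: str = ""

case = CaseFile()
case.people.append(Person("J. Doe", previous_names=["J. Smith"]))
case.evidence.append(EvidenceItem("USB drive from office desk",
                                  "may contain copied customer records"))
print(len(case.evidence))  # 1
```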

Step 4 Determine mitigation
This step is conducted in parallel with Step 3, as you would not want to wait for an investigation to conclude before you start thinking about mitigation of the breach.
When determining mitigation you need to think along two lines:
1 What measures will limit the extent of the breach?
2 What measures will prevent the breach, or the entire class of breaches, from happening again?
You will find that immediate measures of containment are usually quite different from those established for a more permanent result. Also, a measure such as a network lock-down will naturally be lifted after some time. Think of 9/11 for example: a nationwide grounding of all air traffic was enforced. While this measure certainly prevents other potential attackers from getting into the air, it makes no sense in the long run, as you cannot prolong it indefinitely.

More permanent measures will include:
o establishing new policies or procedures
o implementing new technology, such as retina scanners, better man traps and two-factor authentication
o relocating a site to a less exposed area of a country or town.
When determining measures, make sure that the root cause(s) are known and well understood, as any measures derived from falsely assumed root causes will not provide any pre-emptive effect.

Be sure also to determine the cost of the measures, and their effectiveness in regard to the breach or class of breaches. Cost effectiveness by itself is not really that important, as the first consideration needs to be whether the measure is effective at all. Remember, though, that while a measure may seem at first sight not to be cost effective, it may still be so in preventing damage that is hard to measure, such as loss of reputation. In the end, it will come down to an executive decision on what protection, against which kind of damage, takes priority and is therefore allowed to incur substantial costs.
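The trade-off described above can be made concrete with an annualised-loss comparison: a measure pays off when the loss it avoids exceeds its cost. The function names and all figures below are invented for this sketch.

```python
# A hedged sketch of the cost-effectiveness reasoning above. A measure is
# worthwhile when the annualised loss it avoids exceeds its annual cost.
# All figures are illustrative assumptions.

def annualised_loss(single_loss: float, occurrences_per_year: float) -> float:
    # Expected yearly loss: impact of one incident times expected frequency.
    return single_loss * occurrences_per_year

def worth_implementing(measure_cost: float,
                       loss_without: float,
                       loss_with: float) -> bool:
    return (loss_without - loss_with) > measure_cost

before = annualised_loss(200_000, 0.5)   # expected loss without the measure
after = annualised_loss(200_000, 0.05)   # residual loss with the measure
print(worth_implementing(30_000, before, after))  # True
```

Note that this captures only the measurable side; as the text says, hard-to-measure damage such as loss of reputation remains an executive judgement.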

Step 5 Implement mitigation
Once you have successfully determined what to do, you should implement the measures without undue delay. You would not necessarily implement permanent measures 'immediately', as this depends on the complexity of the measures and the nature of the changes that they will bring along. If, for example, you choose to alter the organisational structure of your company as a result of a large-scale bribery scheme, you might encounter greater internal resistance than if you were to just set up another internal firewall.
The emphasis, therefore, is on 'without undue delay' as, if the breach is severe enough, your external auditors will certainly question you about the measures taken.

Implementing mitigation measures also includes all those company departments mentioned in Step 2 of this sample process. So, for example, PR could publish a press statement, the CEO might talk to the press, IT could implement its new firewall, the head of internal auditing might be fired for overlooking the root cause that led to the breach, a project on establishing a new policy and associated measures and training might be launched, and so on.

Step 6 Follow up on investigation results
It is quite obvious that not all investigations can be completed successfully. This means that you need to regularly follow up on the investigation's results, and to study any kind of final result carefully and act accordingly.
Following up also includes going to court or, in more general terms, putting the results of the investigation to best use for the company in the strategic and tactical context.
You should be aware that the legal steps following a breach are usually extremely time consuming. It is not unusual for legal proceedings to go on for as long as two years before a final verdict is reached, and it can take even longer in large corporate cases.

Step 7 Determine degree of resolution achieved
Once you have resolved the breach and implemented the measures that you have identified, it is of paramount importance to determine whether these measures have been effective. This is part of ISO 27001's PDCA (Plan-Do-Check-Act) cycle and is generally just plain good practice. A measure cannot be deemed effective just because you have not suffered another breach for a while; that is definitely not enough. You need to look actively at how the measures have been implemented and to audit them in full.
You may even wish to bring in external auditors, so that you can gain an unbiased view. In general, the more complex the measures you have implemented, the more beneficial it will be to have an external review in addition to your internal one.
If you find that the measures have not been implemented as designed, then you must act, again without undue delay.

Enterprise Governance - Business Process Outsourcing


With the growing trend toward focusing on core business capabilities, many companies are outsourcing selected business functions to expert partners who can perform them more efficiently and cost-effectively. A step beyond traditional IT outsourcing, business process outsourcing includes such functions as cash collection, claims processing, invoicing, payroll and customer support. As recent research by Accenture shows, a significant number of firms now consider BPO a realistic option for reducing overhead costs (Table 9.4).

The decision to outsource administrative and support activities is being taken by forward-thinking managers who question how work has traditionally been carried out and whether there is a better way of doing it. The availability of a new breed of third-party suppliers and complementary information technology (IT) makes outsourcing an increasingly attractive option for some. Many companies now outsource non-core and/or non-strategic activities - such as finance, human resources, legal and administrative processes - to third parties. These operate their businesses along shared services lines to provide services economically to several client organisations through sharing people and resources and by implementing common processes and systems (Table 9.5).


Case Study: Dairy Farm and Cap Gemini Ernst & Young[6]

Dairy Farm International Holdings is a leading retailer of fast-moving consumer goods in Asia Pacific, with more than $6 billion in annual revenues and 60,000 employees in ten territories. In the late 1990s as competition increased dramatically in the Hong Kong market, Dairy Farm embarked on a restructuring effort, which focused on strengthening core competencies, reducing operating costs while growing revenue, and avoiding capital outlays in non-core areas.

Dairy Farm teamed with Cap Gemini Ernst & Young (CGE&Y) Asia Pacific to build OneResource Group (ORG). ORG provides accounting, finance, and procurement services to companies globally. During the first two years of operation, ORG radically reshaped the finance function for Dairy Farm. Now, Dairy Farm Hong Kong only employs one finance person outside of ORG.

In the first two years of its joint venture with CGE&Y, Dairy Farm accomplished the following:

• consolidated to a single financial system across business units;

• reduced the finance and accounting staff by nearly 50 per cent overall;

• achieved a 30 per cent decrease in costs;

• negotiated more than $3 million in savings in the procurement of operating supplies;

• implemented online tools for budgeting, management reporting, procure-to-pay, and T&E processing;

• established a low-cost processing operation in mainland China.

The goal of Dairy Farm's BPO project: build world-class capabilities in finance and procurement while avoiding capital outlays in non-core areas.

The bottom line: CFOs and their direct reports from across all types of organisations and industries are now examining finance and accounting outsourcing. They are looking for ways to improve various transaction-intensive areas of their operation such as auditing, reporting, accounting, receivables and payables.




Reproduced by permission of Cap Gemini Ernst & Young.

Accenture, in association with the Economist Intelligence Unit, conducted a wide-ranging survey of global corporate leaders in early 2003. Based on this research, they identified a number of key patterns with respect to finance outsourcing.[7]

Around 71 per cent of survey respondents expect finance outsourcing to become more prevalent over the next three years; 30 per cent are currently outsourcing finance and accounting functions, and a majority of these think the arrangement has been very successful (8 per cent) or successful (57 per cent).

Companies with metrics in place to measure their gains report significant savings from outsourcing finance and accounting functions. Some 66 per cent of survey respondents saw 'lower costs' as the primary benefit of outsourcing. Rhodia, the French speciality chemicals company, reduced spending by 30 per cent in two years.

Reduced costs are not the only - or always the most significant - benefit. Outsourcing enables companies to focus on their core competencies. It relieves finance managers of responsibility for repetitive or generic business tasks, allowing them to concentrate on high-level management and other value-added activities. And by enabling companies to review and reshape entire business processes with an outsider's discipline, it can help companies execute ambitious transformation plans.

Outsourcing is often perceived as a risky undertaking. Many executives worry that it means surrendering control over vital business functions, and survey respondents cited numerous potential pitfalls, including valuable data falling into competitors' hands (52 per cent), the costs of outsourcing exceeding expectations (48 per cent) and the erosion of in-house knowledge (45 per cent).

Executives are keener to outsource repetitive, generic finance processes than operations requiring higher level analytical thinking. Payroll is a common starting point; this was the activity to have been outsourced by the greatest share (27 per cent) of survey respondents. Niche and specialist areas, such as tax planning and compliance, are natural outsourcing targets, as are turnkey solutions that help companies enter unfamiliar or difficult markets. Budgeting and forecasting were activities respondents deemed least suited to outsourcing.

Nearly 75 per cent of survey respondents thought that finance outsourcing could improve the quality of a company's disclosure. Outsourcing can create a healthy separation between managers trying to achieve performance and accountants charged with measuring it, reducing the temptation to massage the figures.

There are a number of specific reasons why firms may choose to outsource some or all of their shared services activities. These include:

• cost reduction;

• poor performance;

• capabilities not core to strategy;

• better, cheaper, effective alternatives exist;

• insufficient expertise available to upgrade;

• potential loss of control not an issue;

• service no longer relevant;

• previous experience with successful outsourcing;

• too disruptive to make the changes internally.

The conclusion many managers reach when they realise they need to trim overheads and eliminate inefficient internal service units is to outsource them. They see moving the problem out of the organisation as the most prudent and easiest course of action to end inter-departmental disputes, poor service and 'unreasonable' costs. Even after more than a decade of restructuring, corporations are still pursuing the goals of efficiency and 'right-sizing'. The decision to outsource can seem enticingly easy: just let someone else do it. Implementation can be complex and always impacts people and strategy. But in many cases, it may be the wisest alternative.



Case Study: Rhodia and Accenture[8]

Rhodia, a $7 billion maker of specialty chemicals headquartered in France, conducted a benchmark study and found that its support processes were falling into the 'worse than average' category. To improve them, Rhodia turned to finance & accounting (F&A) BPO to achieve improved performance and cost reductions.

In 2001, the company entered into a six-year contract with Accenture to transfer the bulk of its financial and accounting functions to a shared service centre in Prague. Why Prague? Rhodia decided that moving to a Central European location where salaries and operational expenses are about three-quarters less than in Western Europe was a sound business decision. Rhodia laid off about 200 local employees and replaced them with Accenture's staff in Prague.

Transitioning to the Prague shared services centre required a phased approach, starting with all the UK units and following with several waves (30-50 people at a time) from the French locations. By the December 2002 target date, almost 90 per cent of the transition was completed. The lower cost of living and salaries in Prague is estimated to have yielded several millions in annual savings.



In the context of existing shared services, outsourcing can be viewed as the 'third' phase. Having decided to create internal shared services (phase 1) and followed through by implementing best practices (phase 2), some shared services operators realise that they will never be able to reach the standards of world class operations in certain of their activities. Outsourcing parts of shared services operations becomes a viable alternative (phase 3).

Saturday, October 29, 2011

What is IT Governance and Who Should be Involved?

There is no definitive definition of IT Governance. One notable definition is provided by ISACA (2000), which describes IT Governance as a structure of relationships and processes to direct and control the enterprise in order to achieve the enterprise's goals by adding value while balancing risk versus return over IT and its processes.


IT Governance is also described as how those persons entrusted with governance of an entity consider IT in the supervising, monitoring, controlling, and direction of the entity. How IT is applied will have an immense impact on whether the entity will attain its vision, mission, or strategic goals (IT Governance Institute [IGI], 2001).


Implementing IT Governance is not always easy, nor is IT Governance readily embraced. Williams (2002) suggests that many organizations lack clear governance altogether, which makes establishing IT Governance within them impossible. In the absence of a traditional IT Governance organizational structure, ideas differ over who should govern the corporate IT investment. A study from the Scottsdale Informatics Institute (2001) suggests that IT Governance has been lacking because IT has been seen as an operations matter best left to management. Board members often lacked interest or expertise in technology issues.

Contrary to this finding, some believe that IT Governance is the responsibility of the board of directors and, at the very least, executive management (Canadian Institute of Chartered Accountants [CICA], 2002; Board Briefing on IT Governance, 2001; Scottsdale Informatics Institute, 2001).

Grembergen (n.d.) concurs. He recommends that senior managers should be questioning how they can get their Chief Information Officers (CIOs) and IT organization to return business value or ensure that they do not misappropriate capital funding or make bad investments. Furthermore, senior managers should provide mechanisms like an IT Balanced Scorecard to measure and manage the functions of the IT organization like IT-business alignment and the IT strategy development process.

Williams (2001) contends that boards and executives must become more concerned over IT investments because they ultimately affect shareholder value. He suggests a key question is whether the organization's IT investments are aligned with its strategic objectives, thus building the capability necessary to deliver long-term sustainable business value. IT Governance is a necessity, not a luxury. Organizations must bring IT Governance to the board level because it is not smart business to wait for an IT-related disaster before taking action.

The literature is very explicit in describing the roles and responsibilities of those who govern corporate IT investments, the board of directors or at the very least, the executive management team. The CICA (2002) in a recent report concludes that boards should review the strategic planning process, the approved plan, and performance against the plan. In so doing, they can ensure that the strategic planning process is structured so that IT investments are aligned with business objectives, and that the tactical plan to support the strategy is being executed.

The board must make certain there are policies and processes to ensure the integrity of internal controls and the information systems that are used for management. This function can be fulfilled by assigning the responsibility for the organization's use of and investment in IT to a board member, or IT subcommittee, and ensuring that the individual responsible for IT Governance holds a senior position. The board should be confident that management establishes programs to keep employees knowledgeable and compliant with information and security policies.

Finally, boards must exercise policies and processes that identify the business risks associated with IT, determine the level of risks the organization is willing to accept, and ensure that the risks are being monitored. In managing the risks of IT, boards will exercise due diligence of the IT systems and actively mitigate the risks of IT to the organization.

The IGI (2001) also suggests IT Governance is the responsibility of the board of directors and executive management. It is an integral part of enterprise governance and consists of the leadership, organizational structures, and processes that ensure IT sustains and extends the organization's strategies and objectives.
Boards and executives must provide the leadership, structures and processes that ensure that the organization's IT sustains and extends its strategies and objectives. They should keep themselves informed of the role and impact of IT on the enterprise, assign IT responsibilities, define constraints within which IT professionals operate, measure IT performance, manage IT risks, and ensure the compliance of IT Governance standards (Williams, 2001).

Windows 7 MinWin Compared with Vista and Windows Server 2008 MinWin

MinWin is a term that was first heard during the development of Microsoft Windows Vista in 2003. Microsoft uses it informally to refer to the kernel and other operating system components that form the basis for different Windows releases. Windows 7 also has its own MinWin. But how does Windows 7's MinWin differ from the MinWins of Windows Vista and Windows Server 2008?

The Windows 7 MinWin came to public attention in October 2007, when Microsoft developer Eric Traut discussed it in a demonstration. He said that the Windows 7 MinWin would consist of roughly 100 files and could run a basic HTTP server. It would require about 25 MB of disk space and run in about 40 MB of system memory. This MinWin has no graphical user interface; it uses only a command-line interface.

Meanwhile, the Windows Vista MinWin was described as comprising about 95 percent of the operating system's total code base, with additional code layered on top of it. Although the term was first used during Windows Vista's development, Microsoft did not market the operating system using it.

Windows Server 2008, on the other hand, also has its own MinWin, often described as a modified and refined version of the Windows Vista MinWin. The Windows Server 2008 MinWin, however, evolved into a small, self-contained OS core that does not depend on higher-level components.

Many say that the Windows 7 MinWin is likewise an evolution of Vista's, but Microsoft has denied this, stating that the new OS has a new MinWin.

Friday, October 28, 2011

Windows 7 Ultimate Build: Is it for Commercialization?

There are two particularly notable strengths of Windows, and this is a far cry from the multitude of other good things about it.

It is geared for commercialization. Products built for the benefit of consumers tend to serve consumers well, and that is exactly what the company wants. Of course, profits and business health play a part too, but where do those come from if not from consumers? Being built for commercial sale and use, it is also easy to tinker with and to repair in case something breaks.

There's always an upgrade and support to be expected. That's the next best thing to consider, and while it also means profits for the company, it's a position of give and take.
Consider this: give and take is better than a buy-and-be-left-alone world, right?

And how does this concern a normal person with normal computer needs? For example, Windows just released the Windows 7 Ultimate Build, and because it comes from a well-known manufacturer, people already associate it with a good company and good products. Even critics of the program concede that it is a product made for the masses.
This, together with being an upgrade of past releases, makes it better suited than other Windows versions to handling the needs of its users.

The current consensus on the Windows 7 Ultimate Build is that it is best used on individual computers, such as those found at home, because of its improvements. Another improvement worth mentioning is that it is easier to install than past Windows releases. Now that's another reason to get it, right?

Mobile Phone Applications: Which Mobile Phone Application are You?

Not literally, of course, but because of the many applications sprouting up here and there, it's easy to lose track of what exactly you're running on your mobile phone.

You have to understand that there is no universal platform for mobile phone applications. Depending on the brand (especially with Apple; we'll get to that later) and/or the model of your mobile phone, most applications run on the following platforms:


Apple

Who DOESN'T know the iPhone? Apple has taken its raging success in the computer business into the mobile phone industry, with the iPhone achieving ridiculous success around the world. If you have one, Apple has loads of applications available at its website, developed both by Apple and by enthusiastic users. Be warned, though, that Apple works only on Apple.

Symbian and Java ME

If you're kicking it old school and have one of those older (or simpler) phones, chances are one of these two is the platform you have. We put them together because most regular phones have them, and Java apps are compatible with Symbian platforms. It doesn't work vice versa, though.


Microsoft

Just like the dudes at Apple, Microsoft apps can't, won't and don't cross over to anything else. Many smartphones (think the full QWERTY keypad) have Microsoft as their base platform for applications.


Android

Developed mainly in response to Apple's platform, Android is probably the youngest platform in the mobile app market today. Certain brands are only now beginning to incorporate it into their mobile devices; it's up to you to decide whether having it makes you terribly advanced or terribly out of place.

Information Security Governance - Lowering Risk Exposure

Equipment is available to address most risks, and while new and reinvented threats will inevitably continue to abound, implementing fundamental security appliances and practices throughout the system can aid organizations in their attempts to mitigate such threats.

Additional measures can also be taken to reduce the risk of attack and to ensure that hackers cannot clandestinely use an organization's network.

Tech Tip: Being a Good Netizen

Organizations that connect to the Internet can unwittingly serve as participants in denial of service (DoS) attacks. Simple measures, such as filtering addresses, can help to reduce this risk; they are discussed in Request for Comments (RFC) 1918 and RFC 2827. RFCs are a series of documents that describe Internet protocols and related practices, and they are freely available online.

In brief, filtering ensures that perimeter equipment does not allow private addresses to cross the Internet boundary. Private addresses, such as 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16, are meant to be used only on internal networks and should never find their way onto the Internet. Correspondingly, they should never originate from the Internet either.

RFC 2827 is best summarized as follows: a perimeter router or firewall must ensure that packets arriving from the Internet, having originated outside the network, do not bear a source address from its own internal network. Filtering consistent with this policy should be implemented.
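As a rough illustration of what these two filtering rules check, here is a sketch in Python using the standard ipaddress module. The function names are my own, not from any product; a real deployment would implement these checks in router ACLs or firewall rules, not application code.

```python
import ipaddress

# RFC 1918 private ranges: packets with these source addresses should
# never cross the Internet boundary in either direction.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def violates_rfc1918(source_ip: str) -> bool:
    """True if a packet crossing the Internet boundary carries a private source."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in RFC1918_NETS)

def violates_rfc2827(source_ip: str, internal_net: str) -> bool:
    """True if a packet arriving from the Internet claims a source address
    inside our own internal network (a likely sign of spoofing)."""
    return ipaddress.ip_address(source_ip) in ipaddress.ip_network(internal_net)
```

For example, an inbound packet claiming source 192.168.1.5 fails the RFC 1918 check, and one claiming an address inside your own public block fails the RFC 2827 check.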

The table below pairs each type of attack with a technology that can effectively mitigate it. It lists the equipment in which the mitigating technology is primarily found, along with a fair substitute: other equipment that can also provide similar mitigation, albeit to a lesser degree.

Table 4-2. Mitigation Technologies

Threat | Mitigation Technology | Recommended Equipment | Fair Substitute
War-dialers | Strong authentication | Access control servers |
DoS | IDS | NIDS sensor | Router and firewall running IDS; routers with NBAR
DoS | Rate limitation | Provided on ISP equipment |
Unauthorized access | Stateful firewalling | Firewalls | Routers with firewall feature set
Unauthorized access | Intrusion detection | HIDS |
Man-in-the-middle attack | Encryption | VPN concentrator | Routers and firewall with VPN (IPSec) capabilities
Network reconnaissance | IDS | NIDS sensor | Routers with IDS
Network reconnaissance | Filtering | Routers with access control lists |
Network reconnaissance | Encryption | IPSec | Only IPSec traffic would be allowed inbound to the corporate network
Network reconnaissance | Hardening | Automated patch-management systems |
Password attack | Strong authentication (OTP) | Access control servers | Local authentication with strong passwords
IP spoofing | Stateful firewalling | Firewall | Routers running firewall feature set
IP spoofing | RFC 2827 filtering | |
IP spoofing | RFC 1918 filtering | |
Packet sniffers | Switched infrastructure | Switches |
Packet sniffers | Intrusion detection | NIDS sensor |
Packet sniffers | Encryption | VPN concentrator |
Trust exploitation | Private VLANs | Switches |
Worm | Content filtering; virus scanning; intrusion detection | |
Virus | Content filtering; virus scanning; intrusion detection | |
Trojan horses | Content filtering; virus scanning; intrusion detection | |
Application layer attack | Up-to-date patching | Automated patch-management system |
Application layer attack | Intrusion detection | HIDS |
Port redirection | Intrusion detection | HIDS |


Thursday, October 27, 2011

Protect Your Data Before It's Gone

Which is more expensive: the computer, or the data inside it? In my opinion, the data. If the computer is damaged, we can still buy a new one. But if the data from our work is gone, years of effort are lost; what can bring it back? Recovery may be possible, but it is neither as easy nor as cheap as buying a new computer, and the data will not necessarily come back intact.

So don't underestimate the medium that stores our work. It helps to have a plan in place before our data is actually damaged or lost. The first thing we must recognize is what causes data damage or loss, namely:
1. Virus attack
2. System (OS) crash
3. Hard disk crash / damage
4. Accidentally deleted

1. Virus attack

The first cause is a virus attack, which can leave files lost or damaged. Whether the files can be recovered depends on how vicious the attacking virus is. Some viruses merely hide our files or change their extensions to something else; those files we can find again. But there are malignant viruses that delete files from the computer permanently. For that one, we can cry for a week :-)

2. System (OS) Crash

The second cause is an OS crash. If we are using Windows and it crashes, the first recourse is to attempt a recovery from the Windows CD. If the recovery fails, the alternative is to reformat the hard drive and install a new OS. What about our data? If the drive holding it gets formatted, the data is lost along with it.

3. Hard disk crash / damage

The third cause of data loss is a hard disk crash. This is caused by hardware problems, especially in the electronics. We cannot guarantee that an electronic device will run forever; when the disk starts making suspicious sounds, that is when it should be retired. If the hard drive truly crashes or is corrupted, the data on it goes with it. There are ways to get the stored data back: specialist disk-repair shops focus on saving data, but they cannot guarantee the data will come back in full, and the price can be very expensive.

4. Accidentally deleted

The fourth cause is human error: data deleted by accident. If it was only sent to the Recycle Bin, you simply restore it. But if it was removed permanently, getting it back is difficult. As with the second cause, there are recovery techniques, but they do not always work and can come with a high recovery cost.

So we need to prepare well before any of this happens. Nobody wants years of work to vanish in an instant; think of a writer whose manuscripts, accumulated on the hard disk over years, are lost to a virus. Some effective preventive measures:

1. Make a different disk partition for the System (OS) and the data.

Suppose you have an 80 GB hard drive. Make two partitions: one for Windows (usually drive C, say 30 GB) and one for data (usually drive D, 50 GB). Viruses usually attack the system, so when the system crashes from a virus attack, we can easily do a full format and a fresh installation. Data stored on the other partition does not have to be formatted. This handles the first and second causes of damage well. You can even move My Documents to drive D; I will explain how in the next post. Just wait ...

2. Get used to backing up data regularly to another storage medium.

For archives and important data, it also helps to make backups to CD/DVD or to other storage such as an external hard drive. Then, if something happens to our computer, the important data will not be lost, because it is stored in a safe place.
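A minimal sketch of such a backup routine in Python, assuming a simple copy-if-newer policy. The function name and policy are illustrative only; a real backup should use dedicated software and be verified regularly.

```python
import shutil
from pathlib import Path

def backup_changed_files(source_dir: str, backup_dir: str) -> list:
    """Copy files that are new or modified since the last backup run.
    A file is copied when it is absent from the backup directory or its
    modification time is newer than the backed-up copy."""
    copied = []
    src, dst = Path(source_dir), Path(backup_dir)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(str(target))
    return copied
```

Running this on a schedule against an external drive gives a crude incremental backup; unchanged files are skipped because copy2 preserves their timestamps.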

3. Be careful, and check which data will be removed when doing a disk cleanup.

Let us not regret it later because data we deleted turns out to be data we still needed.

4. Protect the computer completely from viruses, Trojans, spyware, and other threats.

This matters all the more if the computer is connected to the Internet. Protection can be set up using software that can be downloaded for free, for example AVG, Spybot, etc.

It is better to prevent than to cure. If we behave correctly in using our computers, we will not have to regret the loss of important data in the future. Good luck.

What You Should Know for the Information Systems Security Management Professional (ISSMP) Certification

As stated in the ISSMP Study Guide, the ISSMP candidate is expected to understand the following key knowledge areas in this domain:


·         Building security into the Systems Development Life Cycle (SDLC)


·         Integrating Application and Network Security Controls


·         Integrating Security with the Configuration Management Program


·         Developing and Integrating Processes to Identify System Vulnerabilities and Threats


Building Security into the Systems Development Life Cycle (SDLC)

One component of implementing information systems security in the SDLC is the application of the Defense in Depth strategy. The Defense in Depth strategy is built upon three critical elements - people, technology, and operations - and comprises the following information assurance principles:

·         Defense in multiple places - Deployment of information protection mechanisms at multiple locations to protect against internal and external threats.


·         Layered defenses - Deployment of multiple information protection and detection mechanisms so that an adversary or threat will have to negotiate multiple barriers to gain access to critical information.


·         Security robustness - Estimation of the robustness of each information assurance component, in terms of its strength and assurance, based on the value of the information system component to be protected and the anticipated threats.


·         Deploy KMI/PKI - Deployment of robust key management infrastructures (KMI) and public-key infrastructures (PKI).


·         Deploy intrusion detection systems - Deployment of intrusion detection mechanisms to detect intrusions, evaluate information, examine results, and, if necessary, to take action.


The System Development Life Cycle Phases

NIST SP 800-14 defines the system life cycle phases as follows:


·         Initiation - The need for the system and its purpose are documented. A sensitivity assessment is conducted as part of this phase. A sensitivity assessment evaluates the sensitivity of the IT system and the information to be processed.


·         Development/Acquisition - Comprises the system acquisition and development cycles. In this phase, the system is designed, developed, programmed, and acquired.


·         Implementation - Installation, testing, security testing, and accreditation are conducted.


·         Operation/Maintenance - The system performs its designed functions. This phase includes security operations, modification/addition of hardware and/or software, administration, operational assurance, monitoring, and audits.


·         Disposal - Disposition of system components and products, such as hardware, software, and information; disk sanitization; archiving files; and moving equipment.


Information System Security Applied to the SDLC

The following list summarizes the information system security steps to be applied to the SDLC as described in SP 800-64.


·         An organization will use the general SDLC described in this document or will have developed a tailored SDLC that meets its specific needs. In either case, NIST recommends that organizations incorporate the associated IT security steps of this general SDLC into their development process:


·         Initiation Phase:


o        Security Categorization - Defines three levels (low, moderate, or high) of potential impact on organizations or individuals should there be a breach of security (a loss of confidentiality, integrity, or availability). Security categorization standards assist organizations in making the appropriate selection of security controls for their information systems.


o        Preliminary Risk Assessment - Results in an initial description of the basic security needs of the system. A preliminary risk assessment should define the threat environment in which the system will operate.


·         Acquisition/Development Phase:


o        Risk Assessment - Analysis that identifies the protection requirements for the system through a formal risk assessment process. This analysis builds on the initial risk assessment performed during the Initiation phase but will be more in-depth and specific.


o        Security Functional Requirements Analysis - Analysis of requirements that may include the following components: (1) system security environment (that is, enterprise information security policy and enterprise security architecture) and (2) security functional requirements.


o        Assurance Requirements Analysis Security - Analysis of requirements that address the developmental activities required and assurance evidence needed to produce the desired level of confidence that the information security will work correctly and effectively. The analysis, based on legal and functional security requirements, will be used as the basis for determining how much and what kinds of assurance are required.


o        Cost Considerations and Reporting - Determines how much of the development cost can be attributed to information security over the life cycle of the system. These costs include hardware, software, personnel, and training.


o        Security Planning - Ensures that agreed-upon security controls, planned or in place, are fully documented. The security plan also provides a complete characterization or description of the information system as well as attachments or references to key documents supporting the agency’s information security program (e.g., configuration management plan, contingency plan, incident response plan, security awareness and training plan, rules of behavior, risk assessment, security test and evaluation results, system interconnection agreements, security authorizations/accreditations, and plan of action and milestones).


o        Security Control Development - Ensures that security controls described in the respective security plans are designed, developed, and implemented. For information systems currently in operation, the security plans for those systems may call for the development of additional security controls to supplement the controls already in place or the modification of selected controls that are deemed to be less than effective.


o        Developmental Security Test and Evaluation - Ensures that security controls developed for a new information system are working properly and are effective. Some types of security controls (primarily those controls of a nontechnical nature) cannot be tested and evaluated until the information system is deployed - these controls are typically management and operational controls.


o        Other Planning Components - Ensures that all necessary components of the development process are considered when incorporating security into the life cycle. These components include selection of the appropriate contract type, participation by all necessary functional groups within an organization, participation by the certifier and accreditor, and development and execution of necessary contracting plans and processes.


·         Implementation Phase:


o        Inspection and Acceptance - Ensures that the organization validates and verifies that the functionality described in the specification is included in the deliverables.


o        Security Control Integration - Ensures that security controls are integrated at the operational site where the information system is to be deployed for operation. Security control settings and switches are enabled in accordance with vendor instructions and available security implementation guidance.


o        Security Certification - Ensures that the controls are effectively implemented through established verification techniques and procedures and gives organization officials confidence that the appropriate safeguards and countermeasures are in place to protect the organization’s information system. Security certification also uncovers and describes the known vulnerabilities in the information system.


o        Security Accreditation - Provides the necessary security authorization of an information system to process, store, or transmit information that is required. This authorization is granted by a senior organization official and is based on the verified effectiveness of security controls to some agreed-upon level of assurance and an identified residual risk to agency assets or operations.


·         Operations/Maintenance Phase:


o        Configuration Management and Control - Ensures adequate consideration of the potential security impacts resulting from specific changes to an information system or its surrounding environment. Configuration management and configuration control procedures are critical to establishing an initial baseline of hardware, software, and firmware components for the information system and subsequently controlling and maintaining an accurate inventory of any changes to the system.


o        Continuous Monitoring - Ensures that controls continue to be effective in their application through periodic testing and evaluation. Security control monitoring (that is, verifying the continued effectiveness of those controls over time) and reporting the security status of the information system to appropriate agency officials is an essential activity of a comprehensive information security program.


·         Disposition Phase:


o        Information Preservation - Ensures that information is retained, as necessary, to conform to current legal requirements and to accommodate future technology changes that may render the retrieval method obsolete.


o        Media Sanitization - Ensures that data is deleted, erased, and written over as necessary.


o        Hardware and Software Disposal - Ensures that hardware and software is disposed of as directed by the information system security officer.
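The Media Sanitization step above can be sketched in code. This is only an illustration of the overwrite-before-delete idea, with hypothetical function names; on journaling or copy-on-write filesystems and on SSDs, a simple overwrite does not guarantee the old blocks are unrecoverable, so real sanitization must follow an approved procedure.

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes before deleting it.
    Illustrative only: does not defeat wear-leveling, journaling, or
    copy-on-write storage."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace the file's bytes in place
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to disk
    os.remove(path)
```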


Integrating Application and Network Security Controls

Application and network security controls can be presented in the context of systems engineering (SE) and the corresponding information systems security engineering (ISSE) controls.


Systems Engineering

Systems engineering can be defined as the branch of engineering concerned with the development of large and complex systems, where a system is understood to be an assembly or combination of interrelated elements or parts working together toward a common objective. The Systems Engineering process consists of the following elements:


·         Discover needs


·         Define system requirements


·         Design system architecture


·         Develop detailed design


·         Implement system


·         Assess effectiveness


The Information Systems Security Engineering Process

The ISSE process mirrors the generic SE process of IATF document 3.1. The ISSE process elements and the associated SE process elements, respectively, are:


·         Discover Information Protection Needs - Discover Needs


·         Define System Security Requirements - Define System Requirements


·         Design System Security Architecture - Design System Architecture


·         Develop Detailed Security Design - Develop Detailed Design


·         Implement System Security - Implement System


·         Assess Information Protection Effectiveness - Assess Effectiveness


Each of the six ISSE process activities will be reviewed in the following sections.


Discover Information Protection Needs

The information systems security engineer can obtain a portion of the information required for this activity from the SE process. The objectives of this activity are to understand and document the customer’s needs and to develop solutions that will meet these needs.

The information systems security engineer should use any reliable sources of information to learn about the customer’s mission and business operations, including areas such as human resources, finance, command and control, engineering, logistics, and research and development. This knowledge can be used to generate a concept of operations (CONOPS) document or a mission needs statement (MNS). Then, with this information in hand, an information management model (IMM) should be developed that ultimately defines a number of information domains. Information management is defined as:

·         Creating information


·         Acquiring information


·         Processing information


·         Storing and retrieving information


·         Transferring information


·         Deleting information


In the Discover Information Protection Needs activity of the ISSE process, the information systems security engineer must document all elements of the activity. These elements include:

·         Roles


·         Responsibilities


·         Threats


·         Strengths


·         Security services


·         Priorities


·         Design constraints


These items form the basis of an Information Protection Policy (IPP), which in turn becomes a component of the customer’s Information Management Policy (IMP).


Define System Security Requirements

In this ISSE activity, the information systems security engineer identifies one or more solution sets that can satisfy the information protection needs of the IPP.

In selecting a solution set, the information systems security engineer must also consider the needs of external systems such as Public Key Infrastructure (PKI) or other cryptographic-related systems.

A solution set consists of a preliminary security CONOPS, the system context, and the system requirements. In close cooperation with the customer and based on the IPP, the information systems security engineer selects the best solution among the solution sets. The information protection functions and the information management functions are delineated in the preliminary security CONOPS, and the dependencies among the organization’s mission and the services provided by other entities are identified. In developing the system context, the information systems security engineer uses systems engineering techniques to identify the boundaries of the system to be protected and allocates security functions to this system as well as to external systems. The information systems security engineer accomplishes this allocation by analyzing the flow of data among the system to be protected and the external systems and by using the information compiled in the IPP and IMM.

The third component of the solution set - the system security requirements - is generated by the information systems security engineer in collaboration with the systems engineers. Requirements should be unambiguous, comprehensive, and concise, and they should be obtained through the process of requirements analysis. The functional requirements and constraints on the design of the information security components include regulations, the operating environment, targeting internal as well as external threats, and customer needs.

At the end of this process, the information systems security engineer reviews the security CONOPS, the security context, and the system security requirements with the customer to ensure that they meet the customer's needs and are accepted by the customer. As with all activities in the ISSE process, documentation is very important and should be generated in accordance with the certification and accreditation requirements.


Design System Security Architecture

The requirements generated in the Define System Security Requirements activity of the ISSE process are necessarily stated in functional terms - indicating what is needed but not how to accomplish what is needed. In Design System Security Architecture, the information systems security engineer performs a functional decomposition of the requirements that can be used to select the components required to implement the designated functions. Some aids that are used to implement the functional decomposition are timeline analyses, flow block diagrams, and a requirements allocation sheet. The result of the functional decomposition is the functional architecture of the information security systems. In the decomposition process, the performance requirements at the higher level are mapped onto the lower-level functions to ensure that the resulting system performs as required. Also as part of this activity, the information systems security engineer determines, at a functional level, the security services that should be assigned to the system to be protected as well as to external systems. Such services include encryption, key management, and digital signatures. Because implementations are not specified in this activity, a complete risk analysis is not possible. General risk analysis, however, can be done by estimating the vulnerabilities in the classes of components that are likely to be used.

As always, documentation should be produced in accordance with the requirements of the certification and accreditation process.


Develop Detailed Security Design

The information protection design is achieved through continuous assessments of risks and the comparison of these risks with the information system security requirements by the ISSE personnel. The design activity is iterative, and it involves both the SE and ISSE professionals. The design documentation should meet the requirements of the certification and accreditation process. It should be noted that this activity specifies the system and components but does not specify products or vendors.


The tasks performed by the information systems security engineer include:

·         Mapping security mechanisms to system security design elements


·         Cataloging candidate commercial off-the-shelf (COTS) products


·         Cataloging candidate government off-the-shelf (GOTS) products


·         Cataloging custom security products


·         Qualifying external and internal element and system interfaces


·         Developing specifications such as Common Criteria protection profiles


Implement System Security

This activity moves the system from the design phase to the operational phase.

The Implement System Security activity concludes with a system effectiveness assessment that produces evidence that the system meets the requirements and needs of the mission. Security accreditation usually follows this assessment.

The assessment is accomplished through the following actions of the information systems security engineer:

·         Verifying that the implemented system does address and protect against the threats itemized in the original threat assessment


·         Providing inputs to the certification and accreditation process


·         Applying information protection assurance mechanisms related to system implementation and testing


·         Providing inputs to and reviewing the evolving system life cycle support plans


·         Providing inputs to and reviewing the operational procedures


·         Providing inputs to and reviewing the maintenance training materials


·         Taking part in multidisciplinary examinations of all system issues and concerns


An important part of the Implement System Security activity is the determination of the specific components of the information system security solution. Some of the factors that have to be considered in selecting the components include:

·         Availability now and in the future


·         Cost


·         Form factor


·         Reliability


·         Risk to system caused by substandard performance


·         Conformance to design specifications


·         Compatibility with existing components


·         Meeting or exceeding evaluation criteria (typical evaluation criteria include the Commercial COMSEC Evaluation Program [CCEP], National Information Assurance Partnership [NIAP], Federal Information Processing Standards [FIPS], NSA criteria, and NIST criteria)


Assess Information Protection Effectiveness

In order to assess the effectiveness of the information protection mechanisms and services, this activity must be conducted as part of all the activities of the complete ISSE and SE process.


Summary Showing the Correspondence of the SE and ISSE Activities

As discussed in the descriptions of the SE and ISSE processes, there is a one-to-one correspondence of activities in the ISSE process to those in the SE process. The following table, taken from IATF document Release 3.1, September 2002, summarizes the ISSE activities and their corresponding SE activities.


Table: Corresponding SE and ISSE Activities

Discover Needs / Discover Information Protection Needs

SE: The systems engineer helps the customer understand and document the information management needs that support the business or mission. Statements about information needs may be captured in an information management model (IMM).
ISSE: The information systems security engineer helps the customer understand the information protection needs that support the mission or business. Statements about information protection needs may be captured in an Information Protection Policy (IPP).

Define System Requirements / Define System Security Requirements

SE: The systems engineer allocates identified needs to systems. A system context is developed to identify the system environment and to show the allocation of system functions to that environment. A preliminary system concept of operations (CONOPS) is written to describe operational aspects of the candidate system (or systems). Baseline requirements are established.
ISSE: The information systems security engineer allocates information protection needs to systems. A system security context, a preliminary system security CONOPS, and baseline security requirements are developed.

Design System Architecture / Design System Security Architecture

SE: The systems engineer performs functional analysis and allocation by analyzing candidate architectures, allocating requirements, and selecting mechanisms. The systems engineer identifies components, or elements, allocates functions to those elements, and describes the relationships between the elements.
ISSE: The information systems security engineer works with the systems engineer in the areas of functional analysis and allocation by analyzing candidate architectures, allocating security services, and selecting security mechanisms. The information systems security engineer identifies components, or elements, allocates security functions to those elements, and describes the relationships between the elements.

Develop Detailed Design / Develop Detailed Security Design

SE: The systems engineer analyzes design constraints, analyzes trade-offs, does detailed system design, and considers life cycle support. The systems engineer traces all the system requirements to the elements until all are addressed. The final detailed design results in component and interface specifications that provide sufficient information for acquisition when the system is implemented.
ISSE: The information systems security engineer analyzes design constraints, analyzes trade-offs, does detailed system and security design, and considers life cycle support. The information systems security engineer traces all the system security requirements to the elements until all are addressed. The final detailed security design results in component and interface specifications that provide sufficient information for acquisition when the system is implemented.

Implement System / Implement System Security

SE: The systems engineer moves the system from specifications to the tangible. The main activities are acquisition, integration, configuration, testing, documentation, and training. Components are tested and evaluated to ensure that they meet the specifications. After successful testing, the individual components - hardware, software, and firmware - are integrated, properly configured, and tested as a system.
ISSE: The information systems security engineer participates in a multidisciplinary examination of all system issues and provides inputs to certification and accreditation process activities, such as verification that the system as implemented protects against the threats identified in the original threat assessment; tracking of information protection assurance mechanisms related to system implementation and testing practices; and providing inputs to system life cycle support plans, operational procedures, and maintenance training materials.

Assess Effectiveness / Assess Information Protection Effectiveness

SE: The results of each activity are evaluated to ensure that the system will meet the users’ needs by performing the required functions to the required quality standard in the intended environment. The systems engineer examines how well the system meets the needs of the mission.
ISSE: The information systems security engineer focuses on the effectiveness of the information protection - whether the system can provide the confidentiality, integrity, availability, authentication, and nonrepudiation for the information it is processing that is required for mission success.


The ISSE process provides input to the certification and accreditation (C&A) process in the form of evidence and documentation. Thus, the information systems security engineer has to consider the requirements of the accrediting authority. The C&A process certifies that the information system meets the defined system security requirements and the system assurance requirements; it is not a design process. The SE/ISSE process also benefits by receiving information back from the C&A process that may result in modifications to the SE/ISSE process activities.

In summary, the outputs of the SE/ISSE process are the implementation of the system and the corresponding system documentation. The outputs of the CERTIFICATION AND ACCREDITATION process are Certification documentation, Certification recommendations, and an Accreditation decision.

Tuesday, October 25, 2011

How to detect Vulnerability of Corporate Network

Probing is an active variation of eavesdropping, usually used to give an attacker a road map of the network in preparation for an intrusion or a DoS attack. Attackers use it to discover what ports are open, what services are running, and what system software is being used. Probing enables an attacker to more easily detect and exploit known vulnerabilities within a target machine.

Scanning, or traffic analysis, uses a “sniffer” to examine hosts for enabled services, documenting which systems are active on a network and which ports are open.

Both of these can be performed either manually or automatically. Manual vulnerability checks are performed using tools such as Telnet to connect to a remote service to see what is listening. Automated vulnerability scanners are software programs that automatically perform all the probing and scanning steps and report the findings back to the user. As a result of the free availability of such software on the Internet, the amount of this type of automated probing has increased.
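A manual check of this kind can be reproduced with a few lines of code. The sketch below, a hypothetical illustration using only Python's standard library, connects to a TCP service the way a Telnet session would and captures whatever greeting banner the service volunteers:

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a TCP service and read its greeting banner, if any.

    This mimics the manual "telnet host port" check described above.
    Returns an empty string if the port is closed or nothing is sent.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(1024).decode("ascii", errors="replace").strip()
            except socket.timeout:
                return ""  # service accepted the connection but sent no banner
    except OSError:
        return ""          # port closed, filtered, or host unreachable
```

Many mail, FTP, and SSH daemons announce their name and version in this banner, which is exactly the information a prober is after.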

Vulnerability Scanning

Vulnerability scanning should be implemented by the security professional to help identify weaknesses in a system. It should be conducted on a regular periodic basis to identify compromised or vulnerable systems. The scans directed at a target system can either be internal, originating from within the system, or external, originating from outside the target system.


Because scanning activity is often a prelude to a system attack, monitoring and analysis of the logs and blocking of unused and exposed ports should accompany the detection of malicious scans.

Conducting scans inside the enterprise on a regular basis is one way to identify and track several types of potential problems, such as unused ports that respond to network requests. Also, uncontrolled or unauthorized software may be located using these scanning techniques.

A common vulnerability-scanning methodology may employ several steps, including an IP device discovery scan, workstation vulnerability scan, and server vulnerability scan.

Discovery Scanning

The intent of a discovery scan is to collect enough information about each network device to identify what type of device it is (e.g., workstation, server, router, firewall), its operating system, and whether it is running any externally vulnerable services such as Web services, FTP, or e-mail. The discovery scan contains two elements: inventory and classification. The inventory scan provides information about the target system’s operating system and its available ports. The classification process identifies applications running on the target system, which aids in determining the device’s function.
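The classification step can be approximated by mapping discovered open ports to likely device roles. The helper below is an illustrative heuristic only - the port-to-role table is an assumption for the sketch, and real discovery scanners also weigh banners and fingerprints, not just port numbers:

```python
# Assumed mapping of well-known ports to device roles, for illustration.
ROLE_HINTS = {
    80: "web server", 443: "web server",
    25: "mail server", 110: "mail server", 143: "mail server",
    21: "ftp server",
    53: "dns server",
}

def classify_device(open_ports):
    """Guess a device's likely function from its set of open ports."""
    roles = {ROLE_HINTS[p] for p in open_ports if p in ROLE_HINTS}
    return sorted(roles) or ["unknown"]
```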


Workstation Scanning

A full workstation vulnerability scan of the standard corporate desktop configuration should be implemented regularly. This scan helps ensure that the standard software configuration is current with the latest security patches and software, and it helps locate uncontrolled or unauthorized software.


Server Scanning

A full server vulnerability scan will determine whether the server OS has been configured to the corporate standards and whether applications are kept current with the latest security patches and software. All services must be inspected for elements that may compromise security, such as default accounts and weak passwords. Also, unauthorized programs such as Trojans may be identified.


Port Scanning

Port scanning is the process of sending a data packet to a port to gather information about the state of the port. This is also called a probe. Port scanning makes it possible to find what TCP and UDP ports are in use. For example, if ports 25, 80, and 110 are open, the device is running the SMTP, HTTP, and POP3 services.
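As a sketch of the technique, the following minimal scanner (standard-library Python, illustrative only) reports which TCP ports accept a full connection:

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Basic TCP connect() scan: a port is reported open if the
    three-way handshake completes, and skipped otherwise.
    Noisy and easily logged, as noted in the text."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # connection refused, timed out, or filtered
    return open_ports
```

On most systems the standard library can then translate a port number to its well-known service name, e.g. `socket.getservbyport(25)` returns `"smtp"`.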


A cracker can use port-scanning software to determine which hosts are active and which are inactive (down) in order to avoid wasting time on inactive hosts. A port scan can gather data about a single host or about all the hosts within a subnet (e.g., a /24 block of 256 adjacent network addresses).

A scan may first be implemented using the ping utility. Then, after determining which hosts and associated ports are active, the cracker can initiate different types of probes on the active ports.

Examples of probes are:

·         Gathering information from the Domain Name Service (DNS)


·         Determining the network services that are available, such as e-mail, FTP, and remote logon


·         Determining the type and release of the operating system


TCP/UDP Scanning Types

Many types of TCP/UDP scanning techniques exist. Some are simple and easily detectable by firewalls and intrusion detection systems, whereas some are more complicated and harder to detect.

Stealth Scans

Certain types of scans are called stealth scans because they try to evade or minimize their chances of detection. Several of the scans outlined later, such as the TCP SYN or TCP FIN scan, can be described as stealth scans.


Another example of a stealth scan is implemented by fragmenting the IP datagram so that the TCP header is split across multiple small fragments. This will bypass some packet-filtering firewalls because no single fragment contains a complete TCP header to match against the filter rules.

Spoofed Scans

Although the term spoofing comes up often in any discussion of security, it can be applied here to conceal the true identity of an attacker. Spoofing allows an attacker to probe the target’s ports without revealing the attacker’s own IP address. The FTP proxy bounce attack described subsequently is an example of a spoofed scan that compromises a third-party FTP server.


The HPing network analysis tool, also described later, hides the source of its scans by using another host through which to probe the target site. Also, NMap provides spoofing capability by allowing the operator to enter an optional “source” address for the scanning packet.

The following are some TCP-based scanning techniques:

  • TCP connect(). Connect() is the most basic and fastest scanning technique: it simply attempts to complete a full TCP connection to each port in succession. The biggest disadvantage for attackers is that it is the easiest to detect and can be stopped at the firewall.

  • TCP SYN (half open) scanning. TCP SYN scanning is often referred to as half-open scanning because, unlike TCP connect(), a full TCP connection is never opened. The scan works as follows:

1.      A SYN packet is sent to a target port.


2.      If a SYN/ACK is received, this indicates the port is listening.


3.      The scanner then breaks the connection by sending an RST (reset) packet.


4.      If an RST is received, this indicates the port is closed.


·         This is harder to trace because fewer sites log incomplete TCP connections, but some packet-filtering firewalls look for SYNs to restricted ports.


  • TCP SYN/ACK scan. TCP SYN/ACK is another way to determine whether ports are open or closed. The TCP SYN/ACK scan works as follows:

o        Scanner initially sends a SYN/ACK.


o        If the port is closed, the target treats the unsolicited SYN/ACK as an error and replies with an RST.


o        If the port is open, the target silently ignores and drops the SYN/ACK packet.


·         This is considered a stealth scan since it isn’t likely to be logged by the host being scanned, but many intrusion detection systems may catch it.


  • TCP FIN scanning. TCP FIN is a stealth scan that works like the TCP SYN/ACK scan.

o        Scanner sends a FIN packet to a port.


o        A closed port replies with an RST.


o        An open port ignores the FIN packet.


·         One limitation of this type of scanning is that TCP FIN works only to find listening ports on non-Windows machines (or, conversely, to identify Windows machines), because Windows replies with an RST regardless of whether the port is open or closed.


  • TCP FTP proxy (bounce attack) scanning. TCP FTP proxy (bounce attack) scanning is a very stealthy scanning technique. It takes advantage of a weakness in proxy FTP connections. It works like this:

o        The scanner connects to an FTP server and requests that the server initiate a data transfer process to a third system.


o        The scanner uses the PORT FTP command to declare that the data transfer process is listening on the target box at a certain port number.


o        It then uses the LIST FTP command to try to list the current directory. The result is sent over the server data transfer process channel.


o        If the transfer is successful, the target host is listening on the specified port.


o        If the transfer is unsuccessful, a “425 Can’t build data connection: Connection refused” message is sent.


·         Some FTP servers disable the proxy feature to prevent TCP FTP proxy scanning.
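The PORT command at the heart of the bounce attack simply encodes an IP address and port as six comma-separated byte values, as defined in RFC 959. A small helper showing that encoding, for illustration only:

```python
def ftp_port_argument(ip: str, port: int) -> str:
    """Encode an IP address and port as the argument to the FTP PORT
    command (RFC 959): the four address bytes, then the port split
    into its high and low bytes, all comma-separated. A bounce scan
    abuses this by naming a third-party host here instead of the
    client itself."""
    h1, h2, h3, h4 = ip.split(".")
    return f"{h1},{h2},{h3},{h4},{port // 256},{port % 256}"
```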


  • IP fragments. Fragmenting IP packets is a variation on the other TCP scanning techniques. Instead of sending a single probe packet, the packet is broken into two or more packets and reassembled at the destination, thus bypassing the packet filters.

  • ICMP scanning (ping sweep). ICMP doesn’t use ports, so this is technically not a port-scanning technique, but it should be mentioned. Using ICMP Echo requests, the scanner can perform what is known as a ping sweep. Scanned hosts will reply with an ICMP Echo reply indicating that they are alive, whereas no response may mean the target is down or nonexistent.
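The decision logic these probe types share can be sketched as a small classifier. The function below is illustrative only - real scanners must craft raw packets, which requires root privilege - and encodes the open/closed/filtered rules described above:

```python
def probe_verdict(scan_type: str, response_flags):
    """Interpret a target's reply to a single probe packet.

    response_flags is the TCP flag string of the reply (e.g. "SA" for
    SYN/ACK, "RA" for RST/ACK), or None if no reply arrived at all.
    """
    if scan_type == "syn":
        if response_flags is None:
            return "filtered"          # probe was silently dropped
        if "S" in response_flags and "A" in response_flags:
            return "open"              # SYN/ACK: the port is listening
        if "R" in response_flags:
            return "closed"            # RST: nothing listening there
    elif scan_type == "fin":
        if response_flags is not None and "R" in response_flags:
            return "closed"            # closed ports answer a FIN with RST
        return "open|filtered"         # open ports ignore the FIN
    return "unknown"
```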

Determining the OS Type

Determining the type of OS is also an objective of scanning, because this will determine the type of attack to be launched.


Sometimes a target’s operating system details can be found very simply by examining its Telnet banners or its File Transfer Protocol (FTP) servers, after connecting to these services.

TCP/IP stack fingerprinting is another technique to identify the particular version of an operating system. Since OS and device vendors implement TCP/IP differently, these differences can help in determining the OS.

Some of these differences include:

·         Time To Live (TTL)


·         Initial Window Size


·         Don’t Fragment (DF) bit


·         Type of Service (TOS)


Table 3-11 shows some common Time To Live values. Remember that the TTL is decremented each time the packet passes through a router, so a packet that started with a TTL of 255 will arrive with a TTL of 249 after six hops (255 − 6).

Table 3-11: Time To Live (TTL) Values

TTL   Typical Source
255   Many network devices, Unix and Macintosh systems
128   Many Windows systems
60    Hewlett-Packard JetDirect printers
32    Some versions of Windows 95B/98
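The hop arithmetic above can be automated: pick the smallest common initial TTL at or above the observed value, and the difference is the hop count. A sketch using only the values from the table (note that other initial TTLs, such as 64 on many modern Unix-like systems, occur in practice):

```python
# Common initial TTLs from Table 3-11, in ascending order.
INITIAL_TTLS = (32, 60, 128, 255)

def estimate_hops(observed_ttl: int):
    """Guess the sender's initial TTL and its hop distance: each
    router hop decrements the TTL by one, so the smallest candidate
    initial TTL >= the observed value is the most likely start."""
    for initial in INITIAL_TTLS:
        if observed_ttl <= initial:
            return initial, initial - observed_ttl
    return None, None  # observed TTL above every known initial value
```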

Another type of OS identification technique is TCP initial sequence number sampling. After the target host responds to a connection request, information about the operating system can be inferred from the pattern of the sequence numbers.


Scanning Tools

Many of these tools are used by crackers and intruders, but they also help the security administrator detect and stop malicious scans. Used with intrusion detection systems, these tools can provide some level of protection by identifying vulnerable systems, and they can provide data about the level of activity directed against a machine or network. Since scanning is a continuous activity (that is, all networked systems are being scanned all of the time), it’s very important that the security professional know what can be compromised. Some common scanning tools are:


·         Computer Oracle and Password System (COPS) - examines a system for a number of known weaknesses and alerts the administrator.


·         HPing - a network analysis tool that sends packets with non-traditional IP stack parameters. It allows the scanner to gather information from the response packets generated.


·         Legion - will scan for and identify shared folders on scanned systems, allowing the scanner to map drives directly.


·         Nessus - a free security-auditing tool for Linux, BSD, and a few other platforms. It requires a back-end server that has to run on a Unix-like platform.


·         NMap - a very common port-scanning package. More information on NMap follows this section.


·         Remote Access Perimeter Scanner (RAPS) - part of the corporate edition of PCAnywhere by Symantec. RAPS will detect most commercial remote control and backdoor packages such as NetBus, and it can help lock down PCAnywhere.


·         Security Administrator’s Integrated Network Tool (SAINT) - examines network services, such as finger, NFS, NIS, ftp and tftp, rexd, statd, and others, to report on potential security flaws.


·         System Administrator Tool for Analyzing Networks (SATAN) - is one of the oldest network security analyzers. SATAN scans network systems for well-known and often exploited vulnerabilities.


·         Tcpview - identifies which application has opened which port on Windows platforms.


·         Snort - is a utility used for network sniffing. Network sniffing is the process of gathering traffic from a network by capturing the data as it passes and storing it to analyze later.



NMap scans for most ports from 1 to 1024 and a number of others in the registered and undefined ranges. This helps identify software such as PCAnywhere, SubSeven, and BackOrifice. Now that a Windows interface has been written, it no longer has to be run only on a Unix system.


NMap allows scanning of both TCP and UDP ports, with root privilege required for UDP. While NMap doesn’t have signature or password-cracking capabilities like those of L0phtcrack, it will estimate how difficult it would be to hijack an open session.


Vulnerable Ports

Although the complete listing of well-known and registered ports is extensive, some ports are attacked more often than others. In Table 3-12, we’ve listed the ports that are the greatest risk to networked systems.


Table 3-12: Commonly Attacked Ports

Port(s)             Service                  Description
21                  ftp                      File Transfer Protocol
23                  telnet                   Telnet virtual terminal
25, 109, 110, 143   smtp, pop2, pop3, imap   SMTP, POP2, POP3, and IMAP messaging
53                  dns                      Domain Name Service
80, 8000, 8080      http                     Hypertext Transfer Protocol and HTTP proxy servers
118                 sqlserv                  SQL database service
119                 nntp                     Network News Transfer Protocol
161                 snmp                     Simple Network Management Protocol
194                 irc                      Internet Relay Chat
389, 636            ldap                     Lightweight Directory Access Protocol (LDAP and LDAPS)
2049                nfs                      Network File System
5631                pcanywhere               PCAnywhere remote control

Issues with Vulnerability Scanning

Some precautions need to be taken when the security administrator begins a program of vulnerability scanning on his or her own network. Some of these issues could cause a system crash or create unreliable scan data.


  • False positives. Some legitimate software uses port numbers registered to other software, which can cause false alarms when port scanning. This can lead to blocking legitimate programs that appear to be intrusions.

  • Heavy traffic. Port scanning can have an adverse effect on WAN links and even effectively disable slow links. Because heavy port scanning generates a lot of traffic, it is usually preferable to perform the scanning outside normal business hours.

  • False negatives. Port scanning can sometimes exhaust resources on the scanning machine, creating false negatives and not properly identifying vulnerabilities.

  • System crash. Port scanning has been known to render needed services inoperable or actually crash systems. This may happen when systems are not up to date with patches or when the scanning process exhausts the targeted system’s resources.

  • Unregistered port numbers. Many port numbers in use are not registered, which complicates the act of identifying what software is using them.
