Sunday, January 30, 2011

Recovery Toolbox File Undelete Free - "Recovery Toolbox File
Undelete Free
Fast, efficient and 100% free recovery tool for files and folders
deleted from NTFS-formatted drives.
If you have been using computers for a good while, you won't have any
problems recalling situations where you really (!) regretted having
deleted some trash files or folders. What seemed completely
unnecessary eventually turned out to be quite important and its loss
became a problem with no apparent solution. Valuable files could be
deleted by you (by mistake), by poorly written programs,
uninstallation scripts and, of course, viruses and malware of all
shapes and forms."

Saturday, January 29, 2011

Common Problems That Cause File Corruption

Let’s look at the major causes of the disappearance and corruption of files.

Computer Viruses

Your primary concern with computer viruses is that some of them may corrupt the core files that run a particular application, which may affect how the application behaves (or if it opens at all) and may damage files you create using that application.

Sadly, today’s virus-ware now includes programs that attack other types of file formats besides those that actually execute your programs (*.exe and *.com files). For example, Word macro viruses are special viruses written to exploit the Visual Basic programming that can be added to Word and other applications to perform specific functions. Word macro viruses are abundant in corporate settings where users indiscriminately pass the infection back and forth between each other unchecked. Some of these viruses automatically send copies of themselves to contacts stored in an e-mail program’s address book, which is even worse than those bad fruitcakes that re-circulate at holiday time.

Improper Shutdown

The effects of improper shutdown can be devastating on some types of files that may be open on your desktop, including documents you’re working on at the time. These files may not be written properly to disk, may not reflect changes you made since the last time you saved the file, and could become corrupted. Always be sure you shut down your system properly whenever possible.

Even if you never shut down your system improperly, power outages can turn your PC off prematurely. If you have files open at the time of an outage, they might be corrupted when you reopen them after the system reboots.
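One common defence against this kind of corruption, used by many well-behaved applications, is to write the new version of a file to a temporary file first and swap it into place only once it is complete. The following is a minimal Python sketch of that technique; the file name and contents are invented for illustration:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write 'data' to 'path' so that a crash or power loss mid-write
    never leaves a half-written file: the old version survives on disk
    until the new one is completely written."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Write the new contents to a temporary file in the same directory.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())  # force the bytes onto the disk
        # Atomically swap the finished file into place.
        os.replace(tmp_path, path)
    except BaseException:
        os.remove(tmp_path)  # clean up the partial temp file
        raise

atomic_write("report.txt", "quarterly figures, draft 3\n")
```

Because the rename step is atomic on most file systems, a crash at any point leaves either the old file or the new one on disk, never a half-written mixture.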

Operating System Instability

An unstable operating system creates a ripe environment for file corruption because it can cause you to crash out of an application in which you’re working or it can freeze the system in mid-session, requiring you to reboot your PC without saving your open work. Take extra precautions to protect the files you’re working on by saving your files frequently during a session, by saving extra copies of your files, or by performing more frequent backups. Avoid installing any new applications or upgrading existing applications until the operating system is stable.

Problem Applications and Utilities

Another potential cause of file damage can be linked to unstable programs you run on your system. These can be applications that seem to generate corrupted files, applications that conflict with the operation of other programs, or applications that affect your system and the way it stores files or maintains file integrity.

You may see file damage when

  • installing older software

  • installing or using utilities written for an earlier version of Windows (more likely if the utility was written to work with systems running FAT16 file systems, such as pre-Windows 98 systems)

Tip: Always check for Windows version compatibility before you buy and/or install software, just as you do with hardware. There are just enough differences between some versions of Windows and the file systems they use to cause serious data issues in some applications.

Formatting and Recovery Tools

For all intents and purposes, formatting your disk wipes it clean of data. But this isn’t entirely true. Expensive professional software packages and data recovery specialists can often look beneath a reformatted disk to extract files and information that you can no longer see. Often, however, these services get priced out of the realm of mere mortals, even if you can claim the cost as a legitimate, tax-deductible business expense.

The same is often true with the recovery disks that many manufacturers distribute with a new PC. Many of those disks work by replacing the current contents of your hard drive with a drive image of what your system looked like when it was configured at the factory, in terms of the operating system and installed applications. When you’re desperate and you use the recovery disk, you may not notice the fine print on the screen warning that you’re about to lose everything you’ve installed since you first turned on the system. And, sadly, a few recovery disks I’ve seen did not even warn you at all; those users didn’t know the implications of using the recovery disk and got a nasty surprise.

Along with recovery disks, you can lose all your current files if you don't save them off your hard drive before you run a “go back to a previous PC time” program. Such programs allow you to revert your system to the way it was before harmful changes were made. System Restore does not typically replace the files you create in your programs; it focuses instead on critical system files. However, some utilities that work similarly to System Restore may replace them, so check their documentation. Utilities such as GoBack allow you to preserve the files you've created or stored since the last time you made a system snapshot.

Likewise, you can have the same problem restoring a drive image. Always back up your good files before you use any of these techniques so that you don’t lose valuable data.

Other Contributing Factors

Several other problems can contribute to lost or damaged files, such as

  • Power-related problems

  • Dirt and debris in the system

  • Programs running in the background at the time a file is being written to disk, such as virus scanners or anti-crash software (software that claims to protect your system from unwanted crashes)

  • Having your disk (usually removable media) close to a strong magnetic field, as when leaving floppy disks on top of a stereo unit

  • Using disk utilities while also trying to save open files

Friday, January 28, 2011

What to Do if Your Hard Drive Drowns - "If you've been following the news, you are
probably aware of the flood crisis that's left parts of Australia
underwater. We recently heard from a reader who had the misfortune of
living through it personally. Like many Down Under, her computer (and
many other possessions) wound up in several feet of water. Naturally,
the data had not been backed up recently. Looking for help on what to
do to save her data, she found this article in The Sydney Morning
Herald, but it offered conflicting advice. So, she turned to us to ask
what she should do..."

Thursday, January 27, 2011

Governance Structures of the Health Care Industry for ICT Sourcing Solutions


Case 1: Organizing the Internal Governance Structures of a Small Health Care Unit

In the first case, two small municipalities form a health care federation that runs a health centre providing health care services for a population of some 13,000 people in the area. On the Finnish scale this is a small organization, though a fairly typical one. The organization had used a new system, including hardware, EPR and network, for over a year. The patient records were previously kept in manual form, and the new EPR is the heart of the organization's new information system. The implementation had gone well and the staff were getting acquainted with the system. However, problems started to appear after some time. They were not yet massive, but the management noticed that they did not have enough expertise to develop the system towards the goal of better supporting the organization's goals and strategies.


The organization had created a comprehensive business strategy, which the new system was supposed to support. One of the most important issues in the strategy was that the organization should be able to support the provider-purchaser model. The purchasers (the municipalities) expected to pay a fixed per-unit price for the services they buy. To satisfy the purchasers, the health care federation needed a system from which it could extract the information required to set the prices. In addition, the health care centre was expected to assess the population's level of demand for different health services.

Our research team developed an information management strategy in co-operation with the management and staff of the federation. The ultimate goal was to determine how the federation could manage its ICT development effectively to support, and also to steer, its business strategy. The strategy development team included our research team (three researchers) and members of the federation's management and staff (three to five persons). The development team met weekly. We carried out some 60 interviews with the staff and management of the federation, the management of the municipalities, and vendors.

With defined responsibilities, the organization had a clearer structure for managing ICT, which is crucial in any governance structure attempting to economize on exchange relationships. A structure should be lasting, so in our model ownership is bound not to a person but to a position. The persons may change, but the position is most likely to remain. With a clear structure of system owners, information also flows more efficiently to management. Achieving a lasting structure made it possible to define rules for managing it; a stable structure supports our main goal of setting up a sound information management strategy.

Exchange relationships with external stakeholders also became clearer. The organization now had a structure with rules through which it could handle relationships more effectively. In particular, relations with system vendors, which had previously been haphazard, became clearer. There was now a defined contact person for each part of the system, and contacts could be conducted through them. The structure also serves the guidance functions mentioned earlier in our definition of governance structure.

The case showed that even, or indeed especially, a small organization needs clear governance structures to handle its various functions. From the transaction cost point of view, health care is moving from a hierarchy, in which market transactions are eliminated, towards a market. This change has been fast, and health care organizations have faced difficulties in adopting new ways of operating. The industry has not had enough time to adjust to the new situation, and organizations have tried to cope with old structures. Governance structures should be emphasized especially for functions that strongly affect processes and activities but for which the organization has little in-house expertise. ICT is naturally not a core business in any health care organization, but once implemented it affects almost every process and activity.


Case 2: Organizing the External Governance Structures of a Medium-Sized Health Care Unit

In the second case the authors conducted evaluation research on a large information system implementation project called Primus during the year 2000. The project was executed in the public sector health care department of the fifth largest city in Finland. Primus comprised the following subprojects: EPR, a telecommunications network, process development and three smaller development projects. The first two subprojects covered the basic infrastructure solutions and were implemented quite successfully. The remaining subprojects were more closely connected with the exchange relationships of daily health care, and were considerably more difficult to master. During the project, 800 users at 440 workstations began using the new patient record system in about a hundred different units around the city's health care department. As in our previous case, the infrastructure, and especially the EPR, played key roles here too. Before the project the patient records were in manual form.


Our evaluation research was divided into two parts. In the first part we evaluated the process that led to the implementation of the new ICT. We focused on management and strategic issues, negotiations, sourcing decisions, supportive issues (training, help desk) and technical solutions: aspects decided during the planning of the project. The outsourcing solutions were also closely examined by the public. The network and the maintenance of hardware were outsourced to a large Finnish teleoperator, a solution that did not satisfy everybody.

In the second part we evaluated the results of the project. The evaluation covered cost-benefit aspects and end-user and patient satisfaction, in order to assess how the project had influenced the activities of the organization. The research included 90 interviews, two questionnaires (staff and customers), two group interviews and one half-day seminar for interest groups. The research steering group also met at least once a month.

During the evaluation project we also carried out some comparative research with a local private health care clinic. One of its main aims was to find out what kind of organization held responsibility for ICT in the private sector and what governance structure it used.

Although these two cases differ in size and in research focus, they give an excellent opportunity to study governance structures in public health care. Public health care organizations operate basically under the same rules and procedures. However, as noted earlier, an organization's size affects its administrative complexity. Complexity naturally also affects governance structures and management. On the other hand, small organizations have less expertise for executing, e.g., ICT projects, and that lack makes projects complex even in such an environment. Although large organizations have more complex projects, they also have more resources for managing them. We also noted earlier that IT outsourcing is more complex than many other forms of outsourcing, since it pervades, affects and shapes most organizational processes in some way, whether the organization is small or large.

This may be a rough generalization, but comparing the two cases, the management of the smaller organization was often desperate: it had no expertise to solve problems and was too small to put enough pressure on providers to receive more attention. The larger organization was the system provider's largest customer, so its situation was completely different (which is not to say it had no problems). In such situations, contracts (rules) and trust become essential from the governance structure perspective.

Since health care as an industry has long traditions in organization and governance structures, it is not easy to create new ones. Nevertheless, health care has to learn to create and use new governance structures if it wants to keep up with technological development. ICT both facilitates and forces organizations to consider new governance structures; in that way ICT acts as a catalyst, renewing health care structures even more deeply and widely than ICT alone requires. Old structures are challenged and their existence is put under close examination.

In both cases, the introduction of ICT challenged old governance structures and made it possible to introduce new ones. The smaller organization struggled to establish ICT governance internally, whereas the bigger organization made a risky outsourcing decision. The bigger organization had an opportunity to change its general governance structures because of modern ICT, but this proved to be a difficult road to take.

Wednesday, January 26, 2011

Low Cost Data Recovery Services Indianapolis

Recover lost data for only US$15, flat rate. The lowest-cost data recovery you can find in Indianapolis and around the world.

Our low-cost online data recovery service in Indianapolis has helped many people recover lost data, both in Indianapolis and around the world. The high cost of data recovery led us to find a way to help people.

Our online data recovery service has the experience and technical expertise to handle any type of data loss situation: data you deleted accidentally, a hard disk partition suddenly lost to a virus or system instability, a hard disk formatted or re-partitioned by another technician with all your valuable data gone, and many other logical problems.

Our data recovery services in Indianapolis feature the industry's most advanced recovery tools, proprietary techniques and the best experts in the business working to recover your lost data.

Online data recovery saves time and money because your files can be recovered in a matter of hours instead of days. Plus, recovering your data remotely can be done from the convenience of your office or home.

Contact our online data recovery expert for any inquiry and free consultation.

Tag: Data Recovery Services Indianapolis

Tuesday, January 25, 2011

What Can Company Leaders Do About IT?

Among the board's responsibilities are reviewing and guiding corporate strategy, setting and monitoring achievement of management's performance objectives, and ensuring the integrity of the organisation's systems.

How Should the Board Address the Challenges?

The board should drive enterprise alignment by:

  • Ascertaining that IT strategy is aligned with enterprise strategy

  • Ascertaining that IT delivers against the strategy through clear expectations and measurement

  • Directing IT strategy to balance investments between supporting and growing the enterprise

  • Making considered decisions about where IT resources should be focused

The board should direct management to deliver measurable value through IT by:

  • Delivering on time and on budget

  • Enhancing reputation, product leadership and cost-efficiency

  • Providing customer trust and competitive time-to-market

The board should also measure performance by:

  • Defining and monitoring measures together with management to verify that objectives are achieved and to measure performance to eliminate surprises

  • Leveraging a system of Balanced Business Scorecards maintained by management that form the basis for executive management compensation

The board should manage enterprise risk by:

  • Ascertaining that there is transparency about the significant risks to the organisation

  • Being aware that the final responsibility for risk management rests with the board

  • Being conscious that risk mitigation can generate cost-efficiencies

  • Considering that a proactive risk management approach can create competitive advantage

  • Insisting that risk management be embedded in the operation of the enterprise

  • Ascertaining that management has put processes, technology and assurance in place for information security to ensure that:

      • Business transactions can be trusted

      • IT services are usable, can appropriately resist attacks and recover from failures

      • Critical information is withheld from those who should not have access to it

How Should Executive Management Address the Expectations?

The executive's focus is generally on cost-efficiency, revenue enhancement and building capabilities, all of which are enabled by information, knowledge and the IT infrastructure. Because IT is an integral part of the enterprise, and as its solutions become more and more complex (outsourcing, third-party contracts, networking, etc.), adequate governance becomes a critical factor for success. To this end, management should:

  • Embed clear accountabilities for risk management and control over IT into the organisation

  • Cascade strategy, policies and goals down into the enterprise and align the IT organisation with the enterprise goals

  • Provide organisational structures to support the implementation of IT strategies and an IT infrastructure to facilitate the creation and sharing of business information

  • Measure performance by having outcome measures [2], [3], [4] for business value and competitive advantage that IT delivers and performance drivers to show how well IT performs

  • Focus on core business competencies IT must support, i.e., those that add customer value, differentiate the enterprise's products and services in the marketplace, and add value across multiple products and services over time

  • Focus on important IT processes that improve business value, such as change, applications and problem management. Management must become aggressive in defining these processes and their associated responsibilities.

  • Focus on core IT competencies that usually relate to planning and overseeing the management of IT assets, risks, projects, customers and vendors

  • Have clear external sourcing strategies, focussing on the management of third-party contracts and associated service level and on building trust between organisations, enabling interconnectivity and information sharing.

[2] In this document, "stakeholder" is used to indicate anyone who has either a responsibility for or an expectation from the enterprise's IT, e.g., shareholders, directors, executives, business and technology management, users, employees, governments, suppliers, customers and the public.

[3] In this document, "board of directors" and "board" are used to indicate the body that is ultimately accountable to the stakeholders of the enterprise.

[4] The COBIT control framework refers to key goal indicators (KGIs) and key performance indicators (KPIs) for the Balanced Business Scorecard concepts of outcome measures and performance drivers.

Friday, January 21, 2011

How to Create Backup for Data Recovery

Although there are many good data recovery systems on the market that will help you recover data lost to a computer crash or virus attack, it is a much better idea to avoid ever needing data recovery by using good backup software.

Cheaper than data recovery
The cost of buying an automated backup software system is small compared to the cost of losing all your work and files and paying a data recovery company to find and restore them.

Good backup software will store many different types of data, such as photos, programs, audio files, and many other system and application files.

Benefits of a backup
So if your computer crashes or catches a virus, you will not have to go down the data recovery path, with all its cost and inconvenience, provided you already have a complete backup of all your important data ready to be restored on your repaired computer.

This software protects your documents and other data with automatic backups at set times. In a data loss situation, you will only lose the work you have done since the last backup.

This can also be very useful if you accidentally delete files, programs or other data, because you will always have a backup from which to recover them.
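The timed, automatic backups described above boil down to a simple idea: periodically copy your important folders into a fresh, time-stamped snapshot folder. The following minimal Python sketch illustrates that idea; the folder and file names are invented for the example, and a real backup product would add compression, encryption and incremental copying on top:

```python
import shutil
import time
from pathlib import Path

def backup(source_dir, backup_root):
    """Copy everything under source_dir into a new time-stamped folder,
    so each run leaves a complete snapshot you can restore from."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = Path(backup_root) / stamp
    shutil.copytree(source_dir, target)  # recursive copy of the folder
    return target

# Demo: make a folder with one document, then snapshot it.
Path("my_documents").mkdir(exist_ok=True)
Path("my_documents/letter.txt").write_text("Dear Sir ...")

snapshot = backup("my_documents", "backups")
```

A scheduler such as Windows Task Scheduler (or cron on other systems) can run a script like this at set times, which is roughly the automation that commercial backup tools package up for you.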

Choosing backup software
There are many different types of backup software that can keep you from ever needing data recovery. To find currently available software, the best source of information is usually a good-quality computer magazine whose experts have tested and assessed the various types of backup software.

There are also some good websites that provide reviews and information on most data recovery systems and, more importantly, on which backup system you should install so that you never need them.

Backup now, not later
Although it always seems tempting to put it off until later, installing backup software on your computer is really essential.

If nothing else, it spares you the expensive and inconvenient data recovery solutions that become necessary if your computer crashes or suffers a virus attack.

The time you spend on it now will give you much more peace of mind in the event that anything goes wrong with your computer or hard drives.

Recommended Backup Software
If you still have not backed up all your valuable data and, after reading this article, want to purchase good backup software, we at HDD Recovery of Blogspot recommend the backup software from Acronis: True Image 11.

This software allows you to back up in the background without interrupting other operations. Reduce downtime by going back to work while a restoration is in full swing. Make scheduled backups and save them in the Acronis Secure Zone, a protected partition accessible only with Acronis utilities. The XP-like interface of Acronis True Image makes it even easier to perform backup and restore operations.

New features of ATI 11
Try & Decide - lets you explore the web, download programs and open email attachments on your computer without risk.
Archive Encryption - an extra level of protection for your backup archives.
Privacy Protection - securely erases data so that it cannot be recovered by someone else.
System State Backup - backs up your system exactly as it is at a given moment.
Optimize Backup Storage - optimizes backups and storage by excluding files and folders.
Automatic Catalog - a catalog of backup locations that makes it easier to manage your backup archives.
Search Archives - makes it easy to find archives and extract any file from your backup archive.
Outlook Data Restore - helps restore your Outlook data.

The brand-new Acronis True Image 11 is the solution you have been waiting for. This cutting-edge backup software allows you to back up your PC, including the OS and applications. If your system crashes, you can recover your data in a few minutes. Your documents, music, photos, Outlook e-mails - all your data and settings will be recovered. No re-installation, just a few mouse clicks - and you're back up and running! With Acronis True Image 11, data loss is no longer a threat!

Thursday, January 20, 2011

The Public and Private Health Care Governance Issue

When organizing services, public and private health care use quite similar processes at the operational level. A visit to a nurse or a doctor because of flu or a fracture generates very similar processes and transactions in both the public and the private sector. The same information and materials are needed in both organizations to cure the illness, so one could claim that the value chain is similar and produces the same value for the organization and the patient. Differences appear, of course, in the money flows, but the basic care process is very much the same.

However, when the organizations are studied at the upper, strategic level, differences in governance structures start to appear. The public and private sectors differ from each other in several ways in terms of goals, decision-making, fund allocation, job satisfaction, accountability and performance evaluation. Typically, public organizations have little flexibility in fund allocation and very little incentive to be innovative. Rigid procedures, structured decision making, dependence on politics, high accountability to the public and the administration, and temporary and politically dependent appointments are features associated with public sector organizations and employees (Aggarwal & Mirani, 1999).

The private sector has different goals, as firms seek to enhance stakeholders' value and maximize profits. Private firms are more flexible than public organizations in terms of budget allocation, personnel decisions and organizational procedures. Merit and reward systems are mostly well defined, and new ideas that maximize a firm's value are encouraged (Aggarwal & Rezaee, 1995).

As these descriptions show, the differences are substantial, especially in the organizing of activities. While the public sector has to follow strict rules, private companies can organise their activities according to the market situation. Looking at our definition of governance structure and the words meaning and rules in it, the importance of the latter is very great in the public sector. Most exchange relations have to follow strict rules, especially in purchasing: public organizations have to organise a public competitive bidding when purchasing services or goods above a certain value.


On the other hand, the word meaning probably carries a stronger emphasis in the private sector. Public health care organizations are guided by national politics and political decisions, which may be well thought out, but which are nevertheless handed down from above and are thus more distant and abstract than strategies built by the organizations themselves.

However, because of several changes in the political and economic environments as well as in technology, the public sector is facing the same uncertain and turbulent environment that the private sector has always faced. In this new environment, public sector organizations are expected to exhibit many features usually seen in the private sector, including some scope for entrepreneurial behaviour. This shift has not been fully accepted in the public sector, and there is concern that the application of the language of consumerism, the contract culture, excessive performance management and the use of quasi-markets might create problems. It is argued that all these need to be balanced by approaches that recognize the value of the public sector (White, 2000). The complexity of course also depends on the size of the organization: the larger the organization, the more administrative information is involved (Spil, 1998).

The increased complexity and turbulent environment point to the changing structures of the public sector. Until the last decade, the structures of the public sector remained quite stable because of governments' strong role in steering them. Starting from the 1980s, however, decentralization and local empowerment have also reached the public health care sector. One could therefore say that at the moment the structures of public sector health care are not on a permanent basis; rather, they are in a turbulent phase. It may be that effectiveness cannot be achieved in the public sector because of the industry's ongoing transition.

One distinctive difference between the public and private sectors, which cannot be overlooked since it greatly affects governance structures through management, is the set of stakeholders each sector has to satisfy. While the private sector aims to maximize the profits of the owners (to use a rough generalization), the public sector has more critical stakeholders. Of course, even in the private sector this is not so simple, as, e.g., employees are a strong stakeholder group with its own interests inside the organization. Employee demands cannot be set aside. Despite the differences, managers in both kinds of organization must work to find a point where most of the stakeholders are satisfied most of the time. In many cases, increasing the satisfaction of one group of stakeholders decreases the satisfaction of others (Dolan, 1998). This affects the structure of exchange relationships, as the stakeholders eventually decide (consciously or unconsciously) whether the relationships the organization maintains are in accordance with their demands.

Another view of public-private sector governance structures is to discuss the issue from a national perspective. The private and public sectors share the health care market, and the national government and legislation have a great effect on those shares. The obligatory governance structures play an essential role, especially in the public sector: public organizations have many responsibilities that they cannot escape. Next we describe some features of the roles and market shares of public and private health care, using Finland as an example.

All health care services are financed mostly (60%–80%) through state or municipal taxation, with the remainder coming from the National Insurance Scheme (10%–20%) and co-payments. The private health care sector is seen more or less as complementing rather than competing against public health care. The markets for the private sector have established themselves slowly, mainly because of the extensive role of the public services. By 1996 the share of private doctor consultations in Finland was 16% of all doctor consultations, and the share of doctors who practiced solely in the private sector was 5%. The total share of private health care services was 22%. The private sector has the strongest market share in general practice visits, dentist and physiotherapist services, and employee health services.

In Finland, health care authorities at the local level have gained more independence in organizing their governance structures since the state subsidy system changed in 1997. Earlier the rule was that local public health care should produce primary care as an internal service, and the state subsidy was granted on the basis of population, morbidity, population density and land area. Since 1997 the criteria have changed and local authorities have gained more independence in organizing services according to local needs. They were encouraged to use methods and approaches familiar from the private sector business environment.

Some opposite developments have been seen at the international level. In European countries the need to strengthen the stewardship role of the state appeared with the introduction of new market mechanisms and the new balance between the state and the market in health systems. Thus, policy makers have sought to steer these market incentives toward the achievement of social objectives (The European Health Report, 2002).
The government's target is therefore both to increase independence and at the same time to steer development. This is a hybrid form of market and hierarchy in which the market works under rules set by an entrepreneur-coordinator (the government) who directs production (transactions).

Sunday, January 16, 2011

Organization of IT Functions

IT functions can be organized in different ways in organizations. Robson (1997, p. 309) makes distinctions between centralized, decentralized and devolved as follows.


Centralized: One single-access function: the IT department provides one single service, with single-access provision. A centrally located IT department may be a continuation of always having been centralized, or may be a regrouping in response to pressures for cost savings. The centralized approach to locating the IT function is effective at gaining, or regaining, control over IT. The technology and systems infrastructure can be efficiently and effectively provided, and there should be few problems of data format, security, or software compatibility. With one agency in control there should be no confusion over responsibilities, and it will have the power to impose standards that ensure that all related parts of the business are able to interface successfully. Although centralization is frequently used to reduce costs, the bureaucracy and inflexibility often associated with a centralized IT function can cause costs to escalate uncontrollably. The early proponents of centralization stated that computing power was proportional to the square of the cost of the processor, and so indicated that there are economies of scale inherent in a centralized IT department.

Radical changes in cost/performance ratios have challenged these proponents' views, and so there may be other, more effective routes to a cost-effective IT infrastructure. Centralized location can lead to confusing the issues of coordination (for instance, in building the infrastructure) with those of control and ownership. A centrally located IT function may also be correlated with IT making a low contribution to the business, since it may be preoccupied with the complexities of its own internal concerns and thus out of touch with business priorities and unable to respond to them.

Decentralized: Lots of single-access functions: the IT function is a number of smaller single-site, single-access centers, a collection of IT departments. The proliferation of multiple IT departments brings IT geographically closer to the user community, but perhaps no nearer in culture or understanding. Decentralization has some powerful advantages. Since it can be much closer to the grass roots of the business, IT has a better chance of motivating and involving users and, by distributing the involvement, the logic is that users will act in a responsible way because they are responsible (and in control, and accountable).

Decentralization focuses less on IS costs and more on user effectiveness. Local IT staff are part of the business. More business-relevant systems should be created since, with fewer, more generalist IT staff who have less chance of being distracted, business needs rather than technical interests drive the systems. In addition, simpler systems may result, on the logic that 'small is beautiful' and 'simple engineering is good engineering'. Whilst these points give some benefits over the centralized location of the IT function, there are drawbacks: what is achieved is many groups all having the same problems, so the main disadvantage is one of duplication-driven higher costs plus staff isolation in the mini-IT sections.


Since IT departments deliver their services in much the same way there is little difficulty in changing from centralized to decentralized provision. The cyclical swing between prioritizing control of centralized location and the flexibility of this decentralized location happens perhaps every five to eight years. The ease with which the change can be made makes it clear that nothing is very radically different between them. The decentralization of IS resources may be one side of this continually flipping coin or be a stage in a progression towards devolved locations. Currently there are strong pressures to lower IT resource costs; there is a growing IS literacy within the entire user community, and there is phenomenal growth in end-user computing.

All of this suggests that decentralization cannot effectively provide the balanced complement to the high degree of standardization associated with centralized IS. Highly centralized IS tends to discourage creativity since IT function's fear of chaos if standards are relaxed is a major inhibitor to the high risk, high payoff application. It would seem that the necessary complement to centralized IT must be devolved IT that will transfer authority and responsibility to where IT and the business interface, so that business-relevant innovations can emerge and be delivered from the combination.

Devolved: Geographically and managerially dispersed: the IT function is a web of lateral linkages plus a significant degree of end-user control over processing and applications systems development and environment. The distinction between a decentralized IT function and a devolved IT function is one of the degree of dispersion of control and authority. This is perhaps the structural name for the collected set of activities that include departmental computing and all forms of user self-managed computing. The advantages and disadvantages of a devolved IT location flow from this dispersal of control. Devolution adds to the technical dispersion inherent in distributed computing but replaces central IS control with organization-wide cooperation and coordination in order to gain integration. There is still a need for automated support of activities in a devolved environment. Rather than striving for the 'lights out' operation of centralized data centers, the thrust of automation should be systems safety.

A devolved IT function leads to a dangerous potential confusion over who will be responsible for the, perhaps unglamorous, housekeeping aspects of IS; devolution must be about who is accountable for the system in all respects. Since the devolved location risks the business of system protection becoming no one's responsibility, the answer is to automate protection as far as possible. The other area to automate as much as possible is the management of the network backbone itself. Software updates, capacity loading adjustments, etc. can be added to basic system and data hygiene housekeeping.

The costs incurred in such housekeeping may be lower, since devolution means users have a direct, vested interest in cost effectiveness. Devolution would seem to be an option favored by organizations that have a good claim to understanding the appropriate role of IS in a competitive business.

Decentralization and distributed computing tend to create islands of technology whereas devolution puts the resources where they are needed by the business, and the main driver for devolution has been the need to get the IT function closer to the business and its customers. The central IT department disappears and is replaced by a utility service that provides the organization-level needs such as network facilities, corporate planning systems, and support for the process of establishing standards and principles for IT procedures. Some central coordination and planning will remain.


Robson (1997, p. 326) finds that devolution has been highly correlated with IS significantly contributing to the business and has been supported by five thrusts:
  • Downsizing trends in processing power: Powerful desktop computers make local access to any nature of system a technical reality. 
  • Growth of standards: Particularly in the area of networking, these allow 'plug and go' capabilities that therefore demand far fewer IT specialist skills. 

  • Greater IT awareness: Amongst all managers there is greater interest in using and managing IT to the business' advantage. 

  • The need to match organizational unit autonomy: Including supporting business decoupling to enable divestment programs. 

  • The drive to manage costs: In enlightened organizations this is not only to cut them (often only with the result of weakening the organization and IS), but also to make them appropriate (that is, lower than the long-term gains). Devolution places costs where gains can be judged against business productivity. 

Monday, January 10, 2011

Generic IT Balanced Scorecard

Different market situations, product strategies, business units, and competitive environments require different scorecards to fit their mission, strategy, technology, and culture. The general BSC framework can be translated to the more specific needs of the monitoring and evaluation of the IT function, and recently the IT BSC has emerged in practice (Graeser et al., 1998; Van Grembergen & Saull, 2001). In Van Grembergen and Van Bruggen (1997) and Van Grembergen and Timmerman (1998) a generic IT scorecard is proposed, consisting of four perspectives: business contribution, user orientation, operational excellence, and future orientation (Table 2). This IT scorecard differs from the company-wide BSC because it is a departmental scorecard for an internal service supplier (IT): the customers are the computer users, the business contribution is to be considered from management's point of view, the internal processes under consideration are the IT processes (systems development and operations), and the ability to innovate measures the use of new technologies and the human IT resources.

Table 2: Balanced Scorecard Applied to IT

User orientation: How do the users view the IT department?
Mission: To be the preferred supplier of information systems and to exploit business opportunities maximally through information technology.
  • preferred supplier of applications
  • preferred supplier of operations
  • partnership with users
  • user satisfaction

Business contribution: How does management view the IT department?
Mission: To obtain a reasonable business contribution from IT investments.
  • control of IT expenses
  • sell IT products and services to third parties
  • business value of new IT projects
  • business value of the IT function

Operational excellence: How effective and efficient are the IT processes?
Mission: To deliver IT products and services efficiently.
  • efficient software development
  • efficient operations
  • acquisition of PCs and PC software
  • problem management
  • user education
  • managing IT staff
  • use of communication software

Future orientation: Is IT positioned to meet future needs?
Mission: To develop opportunities to answer future challenges.
  • permanent training and education of IT staff
  • expertise of IT staff
  • age of application portfolio
  • research into emerging information technologies

A detailed version of the IT BSC model is depicted in Table 2.
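As a rough illustration, the four perspectives and their metrics lend themselves to a simple data structure in which per-perspective scores can be rolled up. The sketch below uses the perspective and metric names from Table 2, but the attached 1-5 scoring convention and the example scores are illustrative assumptions, not part of the source model.

```python
# Sketch: the generic IT BSC of Table 2 as a plain data structure, with an
# assumed 1-5 score per metric rolled up to a per-perspective average.
# Perspective and metric names come from Table 2; scores are hypothetical.

IT_BSC = {
    "User Orientation": [
        "preferred supplier of applications",
        "preferred supplier of operations",
        "partnership with users",
        "user satisfaction",
    ],
    "Business Contribution": [
        "control of IT expenses",
        "sell IT products and services to third parties",
        "business value of new IT projects",
        "business value of the IT function",
    ],
    "Operational Excellence": [
        "efficient software development",
        "efficient operations",
        "acquisition of PCs and PC software",
        "problem management",
        "user education",
        "managing IT staff",
        "use of communication software",
    ],
    "Future Orientation": [
        "permanent training and education of IT staff",
        "expertise of IT staff",
        "age of application portfolio",
        "research into emerging information technologies",
    ],
}

def perspective_average(scores):
    """Average the 1-5 metric scores within one perspective."""
    return round(sum(scores.values()) / len(scores), 2)

# Hypothetical scores for the User Orientation perspective.
user_scores = dict(zip(IT_BSC["User Orientation"], [4, 3, 2, 4]))
print(perspective_average(user_scores))  # -> 3.25
```

A real scorecard would of course weight metrics and track them over time; the point here is only that each perspective is a small, independently measurable bundle.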

Sunday, January 09, 2011

Conducting a Strategic Alignment Maturity Assessment

An essential part of the assessment process is recognizing that it must be done with a team including both business and IT executives. The convergence on a consensus of the maturity levels, and the discussions that ensue, are extremely valuable in understanding the problems and opportunities that need to be addressed to improve business-IT alignment. The most important part of the process is the creation of recommendations addressing the problems and opportunities identified. The most difficult step, of course, is actually carrying out the recommendations. This section ties the assessment metrics together. The examples and experiences provided in Appendix A, together with the procedure described here, serve as the vehicle for validating the model.


Each of the criteria and levels is described by a set of attributes that allow a particular dimension to be assessed using a 1 to 5 Likert scale, where:
  • 1 = this does not fit the organization, or the organization is very ineffective

  • 2 = low level of fit for the organization 

  • 3 = moderate fit for the organization, or the organization is moderately effective 

  • 4 = this fits most of the organization 

  • 5 = strong level of fit throughout the organization, or the organization is very effective 

Different scales can be applied to perform the assessment (e.g., good, fair, poor; 1, 2, 3). However, whatever the scale, it is important to evaluate each of the six criteria with both business and IT executives to obtain an accurate assessment. The intent is to have the team of IT and business executives converge on a maturity level. Typically, the initial review will produce divergent results. This outcome is indicative of the problems/opportunities being addressed.
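The convergence step can be sketched in code: pool the Likert ratings gathered from the business and IT executives and flag a large spread for discussion before forcing a single level. The rating data and the 2-point divergence threshold below are illustrative assumptions, not part of the assessment method itself.

```python
# Sketch: pooling business and IT executives' 1-5 Likert ratings on one
# alignment criterion. A wide spread reproduces the "divergent initial
# review" the text describes; the threshold of 2 is an assumption.

def summarize(ratings):
    """Return (mean, spread) for a list of 1-5 ratings."""
    mean = sum(ratings) / len(ratings)
    spread = max(ratings) - min(ratings)
    return round(mean, 2), spread

business_ratings = [2, 3, 2]  # hypothetical business-executive scores
it_ratings = [4, 4, 3]        # hypothetical IT-executive scores
mean, spread = summarize(business_ratings + it_ratings)
if spread >= 2:
    print(f"mean {mean}, spread {spread}: discuss before converging")
```

With the sample data the spread is 2, so the team would discuss the criterion rather than simply averaging away the disagreement.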

The relative importance of each of the attributes within the criteria may differ among organizations. For example, in some organizations the use of SLAs (Service Level Agreements) might not be considered as important to alignment as the effectiveness of liaisons. Hence, giving SLAs a low maturity assessment should not significantly impact the overall rating in this case. However, it would be valuable if the group discusses why the organization does not consider a particular attribute (in this example, SLAs) to be significant.

Using a Delphi approach with a Group Decision Support Tool (Luftman, 1997) often helps in attaining the convergence. The author's experience suggests that "discussions" among the different team members help to ensure a clearer understanding of the problems and opportunities that need to be addressed.

Keep in mind that the primary objective of the assessment is to identify specific recommendations to improve the alignment of IT and the business. The evaluation team, after assessing each of the six criteria from Level 1 to 5, uses the results to converge on an overall assessment level of the maturity for the firm. They apply the next higher level of maturity as a roadmap to identify what they should do next. A trained facilitator is typically needed for these sessions.

Experience with the initial 25 Fortune 500 companies indicates that more than 80% of the organizations are at Level 2 maturity with some characteristics of Level 3 maturity. Figure 3 (including parts A through F) in Appendix A illustrates the "average" results of the Strategic Alignment Maturity assessments for these 25 companies. These results are the start of a Strategic Alignment Maturity Assessment benchmark repository. As the sample grows, it is anticipated that exemplar benchmarks based on factors such as industry, company age, and company size will be available. The figure shows the maturity attributes for each of the six maturity components. Figure 3 (without the average numbers) can be used as the basis for determining an organization's maturity level.

The specific results of the maturity assessment for seven firms are also included in Figure 3. Keep in mind that the results of these maturity assessments were not the principal objective of this exercise. Rather, the goal is to provide the firm with specific insights regarding what it can do to improve the maturity level and thereby improve IT-business strategic alignment.

Strategic Alignment as a Process

The approach applied to attain and sustain business-IT alignment focuses on understanding the alignment maturity, and on maximizing alignment enablers and minimizing inhibitors. The process (Luftman & Brier, 1999) includes the following six steps:
  1. Set the goals and establish a team. Ensure that there is an executive business sponsor and champion for the assessment. Next, assign a team of both business and IT leaders. Obtaining appropriate representatives from the major business functional organizations (e.g., Marketing, Finance, R&D, Engineering) is critical to the success of the assessment. The purpose of the team is to evaluate the maturity of the business-IT alignment. Once the maturity is understood, the team is expected to define opportunities for enhancing the harmonious relationship of business and IT. Assessments range from three to twelve half-day sessions. The time demanded depends on the number of participants, the degree of consensus required, and the detail of the recommendations to carry out.

  2. Understand the business-IT linkage. The Strategic Alignment Maturity Assessment is an important tool in understanding the business-IT linkage. The team evaluates each of the six criteria. A trained facilitator can be valuable in guiding the important discussions.

  3. Analyze and prioritize gaps. Recognize that the different opinions raised by the participants are indicative of the alignment opportunities that exist. Once understood, the group needs to converge on a maturity level. The team must remember that the purpose of this step is to understand the activities necessary to improve the business-IT linkage. The gaps between where the organization is today and where the team believes it needs to be are what must be prioritized. Apply the next higher level of maturity as a roadmap to identify what can be done next.

  4. Specify the actions (project management). Naturally, knowing where the organization is with regard to alignment maturity will drive what specific actions are appropriate to enhance IT-business alignment. Assign specific remedial tasks with clearly defined deliverables, ownership, timeframes, resources, risks, and measurements to each of the prioritized gaps.

  5. Choose and evaluate success criteria. This step necessitates revisiting the goals and regularly discussing the measurement criteria identified to evaluate the implementation of the project plans. The review of the measurements should serve as a learning vehicle to understand how and why the objectives are or are not being met.

  6. Sustain alignment. Some problems just won't go away. Why are so many of the inhibitors IT related? Obtaining IT-business alignment is a difficult task. This last step in the process is often the most difficult. To sustain the benefit from IT, an "alignment behavior" must be developed and cultivated. The criteria described to assess alignment maturity provide characteristics of organizations that link IT and business strategies. By adopting these behaviors, companies can increase their potential for a more mature alignment assessment and improve their ability to gain business value from investments in IT. Hence, the continued focus on understanding the alignment maturity for an organization and taking the necessary action to improve the IT-business harmony is key.
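The gap analysis of step 3 can be sketched as code: treat the next higher maturity level as the roadmap target and rank criteria by how far they fall short. The six criterion names follow the model's six criteria; the assessed levels and the rollup rule (rounded average) are hypothetical illustrations.

```python
# Sketch of step 3 ("analyze and prioritize gaps"): the next higher maturity
# level serves as the roadmap target, and criteria furthest below it are
# prioritized first. The assessed levels below are hypothetical.

current = {
    "communications": 2,
    "value measurement": 2,
    "governance": 3,
    "partnership": 2,
    "technology scope": 3,
    "skills": 2,
}

def roadmap(assessment):
    """Return (target_level, criteria ordered by gap, largest first)."""
    overall = round(sum(assessment.values()) / len(assessment))
    target = min(overall + 1, 5)  # next higher level, capped at 5
    gaps = {c: target - level for c, level in assessment.items()}
    return target, sorted(gaps, key=gaps.get, reverse=True)

target, priorities = roadmap(current)
print(target, priorities[:3])
```

With the sample data the overall level rounds to 2, so Level 3 becomes the target and the four criteria still at Level 2 head the priority list.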

Thursday, January 06, 2011

Used Super FDisk to Recover a Disk

My cousin gave me her PC to fix. The hard disk was unresponsive. I hooked it up to my own PC with a SATA/USB adapter. It initially worked well enough to recover a few files, but I made the mistake of restarting the PC, and then the disk became unresponsive again.
I could still connect to the disk, but Windows reported that it could not initialize it. Finally I tried Super FDisk, a free program. It found a couple of logical problems on the disk: the disk was apparently reporting that it started at the wrong logical block, which Super FDisk offered to fix. I let it, and the disk started working again.
I then did the usual routine of cloning the disk to a new one in case it decided to die again, scanned for viruses, and brought the Microsoft patches up to date.

An Introduction to Secure Remote Access

Christina M. Bird, Ph.D., CISSP

In the past decade, the problem of establishing and controlling remote access to corporate networks has become one of the most difficult issues facing network administrators and information security professionals. As information-based businesses become a larger and larger fraction of the global economy, the nature of "business" itself changes. "Work" used to take place in a well-defined location, such as a factory, an office, or a store, at well-defined times, between relatively organized hierarchies of employees. But now, "work" happens everywhere: all over the world, around the clock, between employees, consultants, vendors, and customer representatives. An employee can be productive working with a personal computer and a modem in his living room, without an assembly line, a filing cabinet, or a manager in sight.

The Internet's broad acceptance as a communications tool in business and personal life has introduced the concept of remote access to a new group of computer users. They expect the speed and simplicity of Internet access to translate to their work environment as well. Traveling employees want their private network connectivity to work as seamlessly from their hotel room as if they were in their home office. This increases the demand for reliable and efficient corporate remote access systems, often within organizations for whom networking is tangential at best to the core business.

The explosion of computer users within a private network (now encompassing not only corporate employees in the office, but also telecommuters, consultants, business partners, and clients) makes the design and implementation of secure remote access even tougher. In the simplest local area networks (LANs), all users have unrestricted access to all resources on the network. Sometimes, granular access control is provided at the host computer level, by restricting log-in privileges. But in most real-world environments, access to different kinds of data, such as accounting, human resources, or research & development, must be restricted to limited groups of people. These restrictions may be provided by physically isolating resources on the network or through logical mechanisms (including router access control lists and stricter firewall technologies). Physical isolation, in particular, offers considerable protection to network resources, and sometimes develops without being the result of a deliberate network security strategy.

Connections to remote employees, consultants, branch offices, and business partner networks make communications between and within a company extremely efficient; but they expose corporate networks and sensitive data to a wide, potentially untrusted population of users, and a new level of vulnerability. Allowing non-employees to use confidential information creates stringent requirements for data classification and access control. Managing a network infrastructure to enforce a corporate security policy for non-employees is a new challenge for most network administrators and security managers. Security policy must be tailored to facilitate the organization's reasonable business requirements for remote access. At the same time, policies and procedures help minimize the chances that improved connectivity will translate into compromise of data confidentiality, integrity, and availability on the corporate network.

Similarly, branch offices and customer support groups also demand cost-effective, robust, and secure network connections.

This chapter discusses general design goals for a corporate remote access architecture, common remote access implementations, and the use of the Internet to provide secure remote access through the use of virtual private networks (VPNs).

Security Goals for Remote Access

All remote access systems are designed to establish connectivity to privately maintained computer resources, subject to appropriate security policies, for legitimate users and sites located away from the main corporate campus. Many such systems exist, each with its own set of strengths and weaknesses. However, in a network environment in which the protection of confidentiality, data integrity, and availability is paramount, a secure remote access system possesses the following features:

• Reliable authentication of users and systems

• Easy-to-manage granular control of access to particular computer systems, files, and other network resources

• Protection of confidential data

• Logging and auditing of system utilization

• Transparent reproduction of the workplace environment

• Connectivity to a maximum number of remote users and locations

• Minimal costs for equipment, network connectivity, and support

Reliable Authentication of Remote Users/Hosts

It seems obvious, but it is worth emphasizing that the main difference between computer users in the office and remote users is that remote users are not there. Even in a small organization, with minimal security requirements, many informal authentication processes take place throughout the day. Co-workers recognize each other, and have an understanding about who is supposed to be using particular systems throughout the office. Similarly, they may provide a rudimentary access control mechanism if they pay attention to who is going in and out of the company's server room.

In corporations with higher security requirements, the physical presence of an employee or a computer provides many opportunities, technological and otherwise, for identification, authentication, and access control mechanisms to be employed throughout the campus. These include security guards, photographic employee ID cards, and keyless entry to secured areas, among many other tools.

When users are not physically present, the problem of accurate identification and authentication becomes paramount. The identity of network users is the basis for assignment of all system access privileges that will be granted over a remote connection. Whether the network user is a traveling salesman 1500 miles away from corporate headquarters accessing internal price lists and databases, a branch office housing a company's research and development organization, or a business partner with potential competitive interest in the company, reliable verification of identity allows a security administrator to grant access on a need-to-know basis within the network. If an attacker can present a seemingly legitimate identity, then that attacker can gain all of the access privileges that go along with it.

A secure remote access system supports a variety of strong authentication mechanisms for human users, and digital certificates to verify identities of machines and gateways for branch offices and business partners.
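One common strong-authentication mechanism of the kind described here is a challenge-response exchange, in which the user proves knowledge of a shared secret without ever sending it across the wire. The sketch below uses Python's standard hmac and secrets modules; it is an illustrative scheme, not the specific protocol any particular remote access product implements.

```python
# Sketch: HMAC-based challenge-response authentication. The server issues a
# fresh random nonce; the client returns an HMAC of it keyed with a secret
# provisioned out of band. The secret itself never crosses the wire.

import hmac
import hashlib
import secrets

SHARED_SECRET = b"per-user secret provisioned out of band"  # hypothetical

def make_challenge():
    """Server side: a fresh random nonce for each log-in attempt,
    so captured responses cannot be replayed."""
    return secrets.token_bytes(16)

def respond(challenge, secret):
    """Client side: prove knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge, response, secret):
    """Server side: recompute and compare in constant time to
    avoid leaking information through timing differences."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

nonce = make_challenge()
assert verify(nonce, respond(nonce, SHARED_SECRET), SHARED_SECRET)
```

In a production system the shared secret would live in a token or authentication server rather than in code; the structure of the exchange is what matters here.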

Granular Access Control

A good remote access system provides flexible control over the network systems and resources that may be accessed by an off-site user. Administrators must have fine-grained control to grant access for all appropriate business purposes while denying access for everything else. This allows management of a variety of access policies based on trust relationships with different types of users (employees, third-party contractors, etc.). The access control system must be flexible enough to support the organization's security requirements and easily modified when policies or personnel change. The remote access system should scale gracefully and enable the company to implement more complex policies as access requirements evolve.

Access control systems can be composed of a variety of mechanisms, including network-based access control lists, static routes, and host system- and application-based access filters. Administrative interfaces can support templates and groups of users, machines, and networks to help manage multiple access policies. These controls can be provided, to varying degrees, by firewalls, routers, remote access servers, and authentication servers. They can be deployed at the perimeter of a network as well as internally, if security policy so demands.

The introduction of the remote access system should not be disruptive to the security infrastructure already in place in the corporate network. If an organization has already implemented user- or directory-based security controls (e.g., based on Novell's NetWare Directory Service or Windows NT domains), a remote access system that integrates with those controls will leverage the company's investment and experience.
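The group-based policies described above reduce, at their core, to a deny-by-default rule table: a user gets access only if some group they belong to explicitly grants the resource. The group and resource names below are hypothetical; the sketch shows the shape of the check, not any product's API.

```python
# Sketch: granular, group-based access control as a deny-by-default policy
# table. Group and resource names are hypothetical examples.

POLICY = {
    "employees":   {"intranet", "email"},
    "accounting":  {"intranet", "email", "ledger"},
    "contractors": {"project-share"},
}

def allowed(groups, resource):
    """Deny by default: access requires membership in at least one
    group whose policy explicitly grants the resource."""
    return any(resource in POLICY.get(g, set()) for g in groups)

assert allowed(["employees"], "email")            # granted
assert not allowed(["contractors"], "ledger")     # denied by default
assert allowed(["employees", "accounting"], "ledger")
```

Real deployments layer such rules across firewalls, routers, and authentication servers, but keeping the logical policy this simple is what makes it auditable when personnel or policies change.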

Protection of Confidential Data

Remote access systems that use public or semi-private network infrastructure (including the Internet and the public telephone network) provide many opportunities for private data to fall into unexpected hands. The Internet is the most widely known public network, but it is hardly the only one. Even private Frame Relay connections and remote dial-up subscription services (offered by many telecommunications providers) transport data from a variety of locations and organizations on the same physical circuits. Frame Relay sniffers are commodity network devices that allow network administrators to examine traffic over private virtual circuits, and allow a surprising amount of eavesdropping between purportedly secure connections. Reports of packet leaks on these systems are relatively common on security mailing lists like BUGTRAQ and Firewall-Wizards.

Threats that are commonly acknowledged on the Internet also apply to other large networks and network services. Thus, even on nominally private remote access systems (modem banks and telephone lines, cable modem connections, Frame Relay circuits) security-conscious managers will use equipment that performs strong encryption and per-packet authentication.

    Logging and Auditing of System Utilization

    Strong authentication, encryption, and access control are important mechanisms for the protection of corpo-rate data. But sooner or later, every network experiences accidental or deliberate disruptions, from system failures (either hardware or software), human error, or attack. Keeping detailed logs of system utilization helps to troubleshoot system failures.

    If troubleshooting demonstrates that a network problem was deliberately caused, audit information is critical for tracking down the perpetrator. One’s corporate security policy is only as good as one’s ability to associate users with individual actions on the remote access system — if one cannot tell who did what, then one cannot tell who is breaking the rules.

    Unfortunately, most remote access equipment performs rudimentary logging, at best. In most cases, call-level auditing — storing username, start time, and duration of call — is recorded, but there is little information available about what the remote user is actually doing. If the corporate environment requires more stringent audit trails, one will probably have to design custom audit systems.
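A custom audit trail of the kind just described can be sketched as a simple structured log: the call-level attributes named above (username, start time, duration) extended with a per-session action list, so that users can be associated with individual actions. This is an illustrative design, not any particular product; the field names and JSON-lines storage format are assumptions.

```python
import io
import json
import time

# Illustrative call-level audit record: username, start time, and call
# duration, extended with a per-session action list.
# The field names and JSON-lines format are assumptions.
def make_audit_record(username, start, end, actions):
    return {
        "username": username,
        "start_time": start,                  # epoch seconds
        "duration_sec": round(end - start),
        "actions": actions,                   # resources touched during the call
    }

def write_record(logfile, record):
    # One JSON object per line keeps the log easy to grep and to parse later.
    logfile.write(json.dumps(record) + "\n")

log = io.StringIO()                           # stand-in for a real log file
start = time.time()
rec = make_audit_record("jdoe", start, start + 1830,
                        ["login", "read //fileserver/reports", "logout"])
write_record(log, rec)
print(rec["duration_sec"])                    # 1830
```

In practice such records would be shipped to a central, append-only log server so that an intruder on the access gateway cannot erase the evidence of a break-in.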

    Transparent Reproduction of the Workplace Environment

    For telecommuters and road warriors, remote access should provide the same level of connectivity and functionality that they would enjoy if they were physically in their office. Branch offices should have the same access to corporate headquarters networks as the central campus. If the internal network is freely accessible to employees at work, then remote employees will expect the same degree of access. If the internal network is subject to physical or logical security constraints, then the remote access system should enable those constraints to be enforced. If full functionality is not available to remote systems, priority must be given to the most business-critical resources and applications, or people will not use it.

    Providing transparent connectivity can be more challenging than it sounds. Even within a small organization, personal work habits differ widely from employee to employee, and predicting how those differences might affect use of remote access is problematic. For example, consider access to data files stored on a UNIX file server. Employees with UNIX workstations use the Network File System (NFS) protocol to access those files. NFS requires its own particular set of network connections, server configurations, and security settings in order to function properly. Employees with Windows-based workstations probably use the Server Message Block (SMB) protocol to access the same files. SMB requires its own set of configuration files and security tuning. If the corporate remote access system fails to transport NFS and SMB traffic as expected, or does not handle them at all, remote employees will be forced to change their day-to-day work processes.

    Connectivity to Remote Users and Locations

    A robust and cost-effective remote access system supports connections over a variety of mechanisms, including telephone lines, persistent private network connections, dial-on-demand network connections, and the Internet. This allows the remote access architecture to maintain its usefulness as network infrastructure evolves, whether or not all connectivity mechanisms are being used at any given time.

    Support for multiple styles of connectivity builds a framework for access into the corporate network from a variety of locations: hotels, homes, branch offices, business partners, and client sites, domestic or international. This flexibility also simplifies the task of adding redundancy and performance tuning capabilities to the system.

    The majority of currently deployed remote access systems, at least for employee and client-to-server remote connectivity, utilize TCP/IP as their network protocol. A smaller fraction continues to require support for IPX, NetBIOS/NetBEUI, and other LAN protocols; even fewer support SNA, DECnet, and older services. TCP/IP offers the advantage of support within most modern computer operating systems; most corporate applications either use TCP/IP as their network protocol, or allow their traffic to be encapsulated over TCP/IP networks. This chapter concentrates on TCP/IP-based remote access and its particular set of security concerns.

    Minimize Costs

    A good remote access solution will minimize the costs of hardware, network utilization, and support personnel. Note, of course, that the determination of appropriate expenditures for remote access, reasonable return on investment, and appropriate personnel budgets differs from organization to organization, and depends on factors including sensitivity to loss of resources, corporate expertise in network and security design, and possible regulatory issues depending on industry.

    In any remote access implementation, the single highest contribution to overall cost is incurred through payments for persistent circuits, be they telephone capacity, private network connections, or access to the Internet. Business requirements will dictate the required combination of circuit types, typically based on the expected locations of remote users, the number of LAN-to-LAN connections required, and expectations for throughput and simultaneous connections. One-time charges for equipment, software, and installation are rarely primary differentiators between remote access architectures, especially in a high-security environment. However, to fairly judge between remote access options, as well as to plan for future growth, consider the following components in any cost estimates:

    • One-time hardware and software costs

    • Installation charges

    • Maintenance and upgrade costs

    • Network and telephone circuits

    • Personnel required for installation and day-to-day administration

    Not all remote access architectures will meet an organization’s business requirements with a minimum of money and effort, so planning in the initial stages is critical.
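The cost components listed above can be combined into a rough total-cost model for comparing architectures over a planning horizon. A minimal sketch follows; every figure and both scenarios are hypothetical, chosen only to show the shape of the calculation.

```python
# Rough total-cost comparison over a planning horizon, built from the
# cost components listed above. All figures are hypothetical.
def total_cost(one_time_hw_sw, installation, annual_maintenance,
               monthly_circuits, annual_personnel, years):
    one_time = one_time_hw_sw + installation
    recurring = years * (annual_maintenance + annual_personnel)
    circuits = years * 12 * monthly_circuits      # circuit charges usually dominate
    return one_time + recurring + circuits

# Two illustrative scenarios over a three-year horizon:
modem_bank = total_cost(40_000, 5_000, 8_000, 3_000, 60_000, 3)
vpn = total_cost(25_000, 5_000, 6_000, 1_200, 75_000, 3)
print(modem_bank, vpn)                            # 357000 316200
```

Note how the comparison captures the trade-off described in the text: the VPN scenario saves on circuits but assumes a higher personnel cost for Internet and security expertise.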

    At the time of this writing, Internet access for individuals is relatively inexpensive, especially compared to the cost of long-distance telephone charges. As long as home Internet access cost is based on a monthly flat fee rather than per-use calculations, use of the Internet to provide individual remote access, especially for traveling employees, will remain economically compelling. Depending on an organization’s overall Internet strategy, replacing private network connections between branch offices and headquarters with secured Internet connections may result in savings of one third to one half over the course of a couple of years. This huge drop in cost for remote access is often the primary motivation for the evaluation of secure virtual private networks as a corporate remote access infrastructure. But note that if an organization does not already have technical staff experienced in the deployment of Internet networks and security systems, the perceived savings in terms of ongoing circuit costs can easily be lost in the attempt to hire and train administrative personnel.
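As a quick arithmetic check on the "one third to one half" savings claim, consider hypothetical monthly charges for a private branch-office circuit versus a business-class Internet connection carrying the same traffic over a VPN (both figures are assumptions):

```python
# Quick arithmetic check on the "one third to one half" savings claim.
# Both monthly charges are hypothetical.
private_circuit = 2_400   # USD/month for a private branch-office circuit
internet_vpn = 1_300      # USD/month for business Internet plus VPN overhead

savings = 1 - internet_vpn / private_circuit
print(f"{savings:.0%}")   # 46% -- within the claimed range
```

The point is not the specific numbers but the structure: the savings hold only if the recurring circuit charge really is the dominant cost, which is why the personnel caveat above matters.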

    It is the security architect’s responsibility to evaluate remote access infrastructures in light of these requirements. Remote access equipment and service providers will provide information on the performance of their equipment, expected administrative and maintenance requirements, and pricing. Review pricing on telephone and network connectivity regularly; the telecommunications market changes rapidly and access costs are extremely sensitive to a variety of factors, including geography, volume of voice/data communications, and the likelihood of corporate mergers.

    A good remote access system is scalable, cost-effective, and easy to support. Scalability issues include increasing capacity on the remote access servers (the gateways into the private network), through hardware and software enhancements; increasing network bandwidth (data or telephone lines) into the private network; and maintaining staff to support the infrastructure and the remote users. If the system will be used to provide mission-critical connectivity, then it needs to be designed with reliable, measurable throughput and redundancy from the earliest stages of deployment. Backup methods of remote access will be required from every location at which mission-critical connections will originate.

    Remember that not every remote access system necessarily possesses (or requires) each of these attributes. Within any given corporate environment, security decisions are based on preexisting policies, perceived threat, potential losses, and regulatory requirements — and remote access decisions, like all else, will be specific to a particular organization and its networking requirements. An organization supporting a team of 30 to 40 traveling sales staff, with a relatively constant employee population, has minimal requirements for flexibility and scalability — especially since the remote users are all trusted employees and only one security policy applies. A large organization with multiple locations, five or six business partners, and a sizable population of consultants probably requires different levels of remote access. Employee turnover and changing business conditions also demand increased manageability from the remote access servers, which will probably need to enforce multiple security policies and access control requirements simultaneously.

    Remote Access Mechanisms

    Remote access architectures fall into three general categories: (1) remote user access via analog modems and the public telephone network; (2) access via dedicated network connections, persistent or on-demand; and (3) access via public network infrastructures such as the Internet.


    Telephones and Modems

    Telephones and analog modems have been providing remote access to computer resources for the past two decades. A user, typically at home or in a hotel room, connects her computer to a standard telephone outlet and establishes a point-to-point connection to a network access server (NAS) at the corporate location. The NAS is responsible for performing user authentication, access control, and accounting, as well as maintaining connectivity while the phone connection is live. This model benefits from low end-user cost (phone charges are typically very low for local calls, and usually covered by the employer for long-distance tolls) and familiarity. Modems are generally easy to use, at least in locations with pervasive access to phone lines. Modem-based connectivity is more limiting if remote access is required from business locations, which may not be willing to allow essentially unrestricted outbound access from their facilities.

    But disadvantages are plentiful. Not all telephone systems are created equal. In areas with older phone networks, electrical interference or loss of signal may prevent the remote computer from establishing a reliable connection to the NAS. Even after a connection is established, some network applications (particularly time-sensitive services such as multimedia packages and applications that are sensitive to network latency) may fail if the rate of data throughput is low. These issues are nearly impossible to resolve or control from corporate headquarters.

    Modem technology changes rapidly, requiring frequent and potentially expensive maintenance of equipment. And network access servers are popular targets for hostile action because they provide a single point of entrance to the private network — a gateway that is frequently poorly protected.

    Dedicated Network Connections

    Branch office connectivity — network connections for remote corporate locations — and business partner connections are frequently met using dedicated private network circuits. Dedicated network connections are offered by most of the major telecommunications providers. They are generally deemed to be the safest way of connecting multiple locations because the only network traffic they carry “belongs” to the same organization.

    Private network connections fall into two categories: dedicated circuits and Frame Relay circuits. Dedicated circuits are the most private, as they provide an isolated physical circuit for their subscribers (hence, the name).

    The only data on a dedicated link belongs to the subscribing organization. An attacker can subvert a dedicated circuit infrastructure only by attacking the telecommunications provider itself. This offers substantial protection. But remember that telco attacks are the oldest in the hacker lexicon — most mechanisms that facilitate access to voice lines work on data circuits as well because the physical infrastructure is the same. For high-security environments, such as financial institutions, strong authentication and encryption are required even over private network connections.

    Frame Relay connections provide private bandwidth over a shared physical infrastructure by encapsulating traffic in frames. The frame header contains addressing information to get the traffic to its destination reliably. But the use of shared physical circuitry reduces the security of Frame Relay connections relative to dedicated circuits. Packet leakage between frame circuits is well-documented, and devices that eavesdrop on Frame Relay circuits are expensive but readily available. To mitigate these risks, many vendors provide Frame Relay-specific hardware that encrypts the packet payload, protecting it against leaks and sniffing while leaving the frame headers alone.
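The payload-only protection just described — headers left readable so the network can still route the frame, payload encrypted — can be sketched as follows. The hash-based keystream is a toy construction for illustration only; real Frame Relay encryptors use vetted ciphers such as AES.

```python
import hashlib

# Toy sketch of payload-only frame protection: the header (addressing)
# stays in cleartext so the Frame Relay network can still route the frame,
# while the payload is encrypted. The hash-based keystream is for
# illustration only; production hardware uses vetted ciphers.
def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect_frame(key: bytes, header: bytes, payload: bytes) -> bytes:
    ks = keystream(key, len(payload))
    encrypted = bytes(p ^ k for p, k in zip(payload, ks))
    return header + encrypted            # header untouched, payload hidden

key = b"shared-secret"                   # assumed pre-provisioned in the devices
frame = protect_frame(key, b"\x48\x21", b"confidential data")
assert frame.startswith(b"\x48\x21")     # addressing still readable in transit
# XOR is symmetric: applying the same keystream again recovers the payload.
```

An eavesdropper on the shared circuit sees valid addressing but an opaque payload, which is exactly the trade-off the vendor hardware makes.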

    The security of private network connections comes at a price, of course — subscription rates for private connections are typically two to five times higher than connections to the Internet, although discounts for high-volume use can be significant. Deployment in isolated areas is challenging if telecommunications providers fail to provide the required equipment in those areas.

    Internet-Based Remote Access

    The most cost-effective way to provide access into a corporate network is to take advantage of shared network infrastructure whenever feasible. The Internet provides ubiquitous, easy-to-use, inexpensive connectivity. However, important network reliability and security issues must be addressed.

    Internet-based remote user connectivity and wide area networks are much less expensive than in-house modem banks and dedicated network circuits, both in terms of direct charges and in equipment maintenance and ongoing support. Most importantly, ISPs manage modems and dial-in servers, reducing the support load and upgrade costs on the corporate network/telecommunications group.

    Of course, securing private network communications over the Internet is a paramount consideration. Most TCP/IP protocols are designed to carry data in cleartext, making communications vulnerable to eavesdropping attacks. Lack of IP authentication mechanisms facilitates session hijacking and unauthorized data modification (while data is in transit). A corporate presence on the Internet may open private computer resources to denial-of-service attacks, thereby reducing system availability. Ongoing development of next-generation Internet protocols, especially IPSec, will address many of these issues. IPSec adds per-packet authentication, payload verification, and encryption mechanisms to traditional IP. Until it becomes broadly implemented, private security systems must explicitly protect sensitive traffic against these attacks.
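Per-packet authentication of the kind IPSec provides can be illustrated with a minimal HMAC sketch: the sender appends a keyed integrity tag, and the receiver verifies it before accepting the packet, so in-transit modification is detected. The pre-shared key and packet layout below are assumptions, and key management is out of scope.

```python
import hashlib
import hmac

# Minimal sketch of per-packet authentication in the spirit of IPSec's
# integrity protection. The pre-shared key and packet layout are
# assumptions; key management is out of scope here.
KEY = b"pre-shared-key"

def send(payload: bytes) -> bytes:
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag                        # tag travels with the packet

def receive(packet: bytes) -> bytes:
    payload, tag = packet[:-32], packet[-32:]   # SHA-256 tag is 32 bytes
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet modified in transit")
    return payload

packet = send(b"GET /payroll")
assert receive(packet) == b"GET /payroll"       # untouched packet passes
tampered = b"GET /pub" + packet[8:]             # attacker alters bytes in transit
# receive(tampered) now raises ValueError
```

Note that this sketch provides integrity and authenticity but not confidentiality; IPSec's ESP mode adds encryption on top of the same per-packet idea.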

    Internet connectivity may be significantly less reliable than dedicated network links. Troubleshooting Internet problems can be frustrating, especially if an organization has typically managed its wide area network connections in-house. The lack of any centralized authority on the Internet means that resolving service issues, including packet loss, higher than expected latency, and loss of packet exchange between backbone Internet providers, can be time-consuming. Recognizing this concern, many of the national Internet service providers are beginning to offer “business class” Internet connectivity, which provides service level agreements and improved monitoring tools (at a greater cost) for business-critical connections.

    Given mechanisms to ensure some minimum level of connectivity and throughput, depending on business requirements, VPN technology can be used to improve the security of Internet-based remote access. For the purposes of this discussion, a VPN is a group of two or more privately owned and managed computer systems that communicate “securely” over a public network (see Exhibit 9.1).

    Security features differ from implementation to implementation, but most security experts agree that VPNs include encryption of data, strong authentication of remote users and hosts, and mechanisms for hiding or masking information about the private network topology from potential attackers on the public network. Data in transmission is encrypted between the remote node and the corporate server, preserving data confidentiality and integrity. Digital signatures verify that data has not been modified. Remote users and hosts are subject to strong authentication and authorization mechanisms, including one-time password generators and digital certificates. These help to guarantee that only appropriate personnel can access and modify corporate data. VPNs can prevent private network addresses from being propagated over the public network, thus hiding potential target machines from attackers attempting to disrupt service.

    In most cases, VPN technology is deployed over the Internet (see Exhibit 9.2), but there are other situations in which VPNs can greatly enhance the security of remote access. An organization may have employees working at a business partner location or a client site, with a dedicated private network circuit back to the home campus. The organization may choose to employ a VPN application to connect its own employees back into their home network — protecting sensitive data from potential eavesdropping on the business partner network. In general, whenever a connection is built between a private network and an entity over which the organization has no administrative or managerial control, VPN technology provides valuable protection against data compromise and loss of system integrity.

    When properly implemented, VPNs provide granular access control, accountability, predictability, and robustness at least equal to that provided by modem-based access or Frame Relay circuits. In many cases, because network security has been a consideration throughout the design of VPN products, they provide a higher level of control, auditing capability, and flexibility than any other remote access technology.

    Virtual Private Networks

    The term “virtual private network” is used to mean many different things. Many different products are marketed as VPNs, but offer widely varying functionality. In the most general sense, a VPN allows remote sites to communicate as if their networks were directly connected. VPNs also enable multiple independent networks to operate over a common infrastructure. The VPN is implemented as part of the system’s networking. That is, ordinary programs like Web servers and e-mail clients see no difference between connections across a physical network and connections across a VPN.

    VPN technologies fall into a variety of categories, each designed to address distinct sets of concerns. VPNs designed for secure remote access implement cryptographic technology to ensure the confidentiality, authenticity, and integrity of traffic carried on the VPN. These are sometimes referred to as secure VPNs or crypto VPNs. In this context, private suggests confidentiality and has specific security implications: namely, that the data will be encoded so as to be unreadable, and unmodified, by unauthorized parties.

    Some VPN products are aimed at network service providers. These service providers — including AT&T, UUNET, and MCI/Sprint, to name only a few — build and maintain large telecommunications networks, using infrastructure technologies like Frame Relay and ATM. The telecom providers manage large IP networks based on this private infrastructure. For them, the ability to manage multiple IP networks using a single infrastructure might be called a VPN. Some network equipment vendors offer products for this purpose and call them VPNs. When a network service provider offers this kind of service to an enterprise customer, it is marketed as equivalent to a private, leased-line network in terms of security and performance. The fact that it is implemented over an ATM or Frame Relay infrastructure does not matter to the customer, and is rarely made apparent.

    These so-called VPN products are designed for maintenance of telecom infrastructure, not for encapsulating private traffic over public networks like the Internet, and are therefore addressing a different problem. In this context, the private aspect of a VPN refers only to network routing and traffic management. It does not imply the use of security mechanisms such as encryption or strong authentication.

    Adding further confusion to the plethora of definitions, many telecommunications providers offer subscription dial-up services to corporate customers. These services are billed as “private network access” to the enterprise computer network. They are less expensive for the organization to manage and maintain than in-house access servers because the telecom provider owns the telephone circuits and network access equipment.

    But let the buyer beware. Although the providers tout the security and privacy of the subscription services, the technological mechanisms provided to help guarantee privacy are often minimal. The private network points-of-presence in metropolitan areas that provide local telephone access to the corporate network are typically co-located with the provider’s Internet access equipment, sometimes running over the same physical infrastructure. Thus, the security risks are often equivalent to using a bare-bones Internet connection for corporate access, often without much ability for customers to monitor security configurations and network utilization. Two years ago, the services did not encrypt private traffic. After much criticism, service providers are beginning to deploy cryptographic equipment to remedy this weakness.

    Prospective customers are well-advised to question providers on the security and accounting within their service. The security considerations that apply to applications and hardware employed within an organization apply to network service providers as well, and are often far more difficult to evaluate. Only someone familiar with a company’s security environment and expectations can determine whether or not they are supported by a particular service provider’s capabilities.

    Selecting a Remote Access System

    For organizations with small, relatively stable groups of remote users (whether employees or branch offices), the cost benefits of VPN deployment are probably minimal relative to the traditional remote access methods.

    However, for dynamic user populations, complex security policies, and expanding business partnerships, VPN technology can simplify management and reduce expenses:

    • VPNs enable traveling employees to access the corporate network over the Internet. By using remote sites’ existing Internet connections where available, and by dialing into a local ISP for individual access, expensive long-distance charges can be avoided.

    • VPNs allow employees working at customer sites, business partners, hotels, and other untrusted locations to access a corporate network safely, as if over dedicated, private connections.

    • VPNs allow an organization to provide customer support to clients using the Internet, while minimizing risks to the client’s computer networks.

    For complex security environments requiring the simultaneous support of multiple levels of access to corporate servers, VPNs are ideal. Most VPN systems interoperate with a variety of perimeter security devices, such as firewalls. VPNs can utilize many different central authentication and auditing servers, simplifying management of the remote user population. Authentication, authorization, and accounting (AAA) servers can also provide granular assignment of access to internal systems. Of course, all this flexibility requires careful design and testing — but the benefits easily repay the initial learning curve and implementation effort.

    Despite the flexibility and cost advantages of using VPNs, they may not be appropriate in some situations; for example:

    1. VPNs reduce costs by leveraging existing Internet connections. If remote users, branch offices, or business partners lack adequate access to the Internet, then this advantage is lost.

    2. If the required applications rely on non-IP traffic, such as SNA or IPX, then VPNs become more complex. Either the VPN clients and servers must support the non-IP protocols, or IP gateways (translation devices) must be included in the design. The cost and complexity of maintaining gateways in one’s network must be weighed against alternatives like dedicated Frame Relay circuits, which can support a variety of non-IP communications.

    3. In some industries and within some organizations, the use of the Internet for transmission of private data is forbidden. For example, the federal Health Care Financing Administration does not allow the Internet to be used for transmission of patient-identifiable Medicare data (at the time of this writing). However, even within a private network, highly sensitive data in transmission may be best protected through the use of cryptographic VPN technology, especially bulk encryption of data and strong authentication/digital certificates.

    Remote Access Policy

    A formal security policy sets the goals and ground rules for all of the technical, financial, and logistical decisions involved in solving the remote access problem (and in the day-to-day management of all IT resources). Computer security policies generally form only a subset of an organization’s overall security framework; other areas include employee identification mechanisms, access to sensitive corporate locations and resources, hiring and termination procedures, etc.

    Few information security managers or auditors believe that their organizations have well-documented policy. Configurations, resources, and executive philosophy change so regularly that maintaining up-to-date documentation can be prohibitively difficult. But the most effective security policies define expectations for the use of computing resources within the company, and for the behavior of users, operations staff, and managers on those computer systems. They are built on the consensus of system administrators, executives, and legal and regulatory authorities within the organization. Most importantly, they have clear management support and are enforced fairly and evenly throughout the employee population.

    Although the anatomy of a security policy varies from company to company, it typically includes several components.

    • A concisely stated purpose defines the security issue under discussion and introduces the rest of the document.

    • The scope states the intended audience for the policy, as well as the chain of oversight and authority for enforcement.

    • The introduction provides background information for the policy, and its cultural, technical, and economic motivators.

    • Usage expectations include the responsibilities and privileges with regard to the resource under discussion. This section should include an explicit statement of the corporate ownership of the resource.

    • The final component covers system auditing and violation of policy: an explicit statement of an employee’s right to privacy on corporate systems, appropriate use of ongoing system monitoring, and disciplinary action should a violation be detected.

    Within the context of remote access, the scope needs to address which employees qualify for remote access to the corporate network. It may be tempting to give access to everyone who is a “trusted” user of the local network. However, need ought to be justified on a case-by-case basis, to help minimize the risk of inappropriate access.

    A sample remote access policy is included in Exhibit 9.3.

    Another important issue related to security policy and enforcement is ongoing end-user education. Remote users require specific training dealing with the appropriate use of remote connectivity; awareness of computer security risks in homes, hotels, and customer locations, especially related to unauthorized use and disclosure of confidential information; and the consequences of security breaches within the remote access system.

    EXHIBIT 9.3 Sample Remote Access Policy

    Purpose of Policy: To define expectations for use of the corporate remote access server (including access via the modem bank and access via the Internet); to establish policies for accounting and auditing of remote access use; and to determine the chain of responsibility for misuse of the remote access privilege.

    Intended Audience: This document is provided as a guideline to all employees requesting access to corporate network computing resources from non-corporate locations.

    Introduction: Company X provides access to its corporate computing environment for telecommuters and traveling employees. This remote connectivity provides convenient access into the business network and facilitates long-distance work. But it also introduces risk to corporate systems: risk of inappropriate access, unauthorized data modification, and loss of confidentiality if security is compromised. For this reason, Company X provides the following standards for use of the remote access system.

    All use of the Company X remote access system implies knowledge of and compliance with this policy.

    Requirements for Remote Access: An employee requesting remote access to the Company X computer network must complete the Remote Access Agreement, available on the internal Web server or from the Human Resources group. The form includes the following information: employee’s name and log-in ID; job title, organizational unit, and direct manager; justi?cation for the remote access; and a copy of remote user responsibilities. After completing the form, and acknowledging acceptance of the usage policy, the employee must obtain the manager’s signature and send the form to the Help Desk.


    NO access will be granted unless all fields are complete.
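The completeness rule above can be sketched as a simple validation check. The field names mirror the Remote Access Agreement described in this policy, but they are assumed names for illustration, not the actual form's schema.

```python
# Sketch of the "no access unless all fields are complete" rule: requests
# with any blank field are rejected. The field names are assumptions.
REQUIRED_FIELDS = [
    "employee_name", "login_id", "job_title", "org_unit",
    "manager", "justification", "manager_signature",
]

def request_is_complete(form: dict) -> bool:
    # A field counts as complete only if it is present and non-blank.
    return all(str(form.get(field, "")).strip() for field in REQUIRED_FIELDS)

form = {
    "employee_name": "J. Doe", "login_id": "jdoe", "job_title": "Analyst",
    "org_unit": "Finance", "manager": "A. Smith",
    "justification": "telecommutes two days per week",
    "manager_signature": "",       # manager has not yet signed
}
print(request_is_complete(form))   # False -- access must be denied
```

Automating the check at the Help Desk intake step keeps the policy's "all fields complete" requirement from depending on manual review alone.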

    The Human Resources group will be responsible for annually reviewing ongoing remote access for employees. This review verifies that the person is still employed by Company X and that their role still qualifies them for use of the remote access system. Human Resources is also responsible for informing the IT/Operations group of employee terminations within one working day of the effective date of termination.

    IT/Operations is responsible for maintaining the modem-based and Internet-based remote access systems; maintaining the user authentication and authorization servers; and auditing use of the remote access system (recording start and end times of access and user IDs for chargeback accounting to the appropriate organizational units).

    Remote access users are held ultimately responsible for the use of their system accounts. The user must protect the integrity of Company X resources by safeguarding modem telephone numbers, log-in processes and start-up scripts; by maintaining their strong authentication tokens in their own possession at all times; and by NOT connecting their remote computers to other private networks at the same time that the Company X connection is active. [This provision does not include private networks maintained solely by the employee within their own home, so long as the home network does not contain independent connections to the Internet or other private (corporate) environments.] Use of another employee’s authentication token, or loan of a personal token to another individual, is strictly forbidden.

    Unspecified actions that may compromise the security of Company X computer resources are also forbidden. IT/Operations will maintain ongoing network monitoring to verify that the remote access system is being used appropriately. Any employee who suspects that the remote access system is being misused is required to report the misuse to the Help Desk immediately.

    Violation of this policy will result in disciplinary action, up to and including termination of employment or criminal prosecution.
