Friday, September 30, 2011

Lower Hardware Requirements for Windows 7?

Many Windows users are holding their breath for Microsoft's latest release, Windows 7. It is said to be a substantial improvement over Windows Vista, thanks to big new features such as better runtime behavior.


What are the Requirements?


Windows 7 is expected to maintain the status quo in terms of resource requirements, which means users who are expecting a much lighter system are in for a disappointment. At best, Windows 7 is said to have a resource footprint quite similar to Windows Vista's.
One rumored requirement of Windows 7 is enormous memory: unlike Vista, which needs 2 GB as its minimum requirement, Windows 7 is said to need 4 GB. Moreover, single-core processors are passé, and dual cores are a must.


Windows 7 will also build on top of Windows Vista's graphics, audio, and storage subsystems. It will likewise carry over the usual details such as UAC, the kernel, and driver integration.


The published requirements for Windows 7 are a computer with a 1-GHz 32-bit (x86) or 64-bit (x64) processor, 1 GB of system memory, a DirectX 10 graphics card with at least 128 MB of graphics memory, and a 40 GB hard drive. These specifications should be enough to run Windows 7.


Microsoft is trying its best to make Windows 7 more energy efficient and less demanding of RAM. The release is eagerly awaited because of these claims of lower hardware requirements. But are the claims true?

Media Center Features on Windows 7: Pictures, Music, and TV

Today's computer users are increasingly multimedia-oriented: people love everything from pictures to music to videos. Microsoft's upcoming operating system, Windows 7, allows more advanced and enjoyable use of multimedia through its enhanced Windows Media Center.


For pictures, Windows 7 users get a clean, attractive graphical interface. The user can browse photos as if printed snapshots were scattered across a tabletop. Once a picture is chosen, it moves to the center, full screen and in full color, while the rest of the scattered pictures float in the background in desaturated black and white.


Music gains several cool features as well. Turbo scroll for music is alphabetical, which makes browsing and searching for a favorite artist easier. Windows 7's Media Center also displays more interesting background images while playing songs, along with metadata about the track currently playing.


TV is also easier to use in the new Windows Media Center. Turbo scroll works on the channel guide in chronological order, so it is easy to locate channels by the times the user has watched certain shows.


Other multimedia functions, such as TV, can also be launched from the Start menu, which permits easier access to the most frequently used multimedia.

Multimedia libraries are also easier to manage. Adding, deleting, and customizing multimedia files in these libraries is easier and more enjoyable than in older operating systems.


Certainly, pictures, movies, and TV will be more exciting on Windows 7 with its advanced and more engaging Windows Media Center.

Thursday, September 29, 2011

What Is Digital Identity, and What Role Does It Play?

In 1974, the family therapist Salvador Minuchin declared that "The human experience of identity has two elements: a sense of belonging and a sense of being separate." This is as good a description of digital identity as it is of our psychological identity. A digital identity contains data that uniquely describes a person or thing (called the subject or entity in the language of digital identity) but also contains information about the subject's relationships to other entities.

To see an example of this, consider the data record, stored somewhere in your state or country's computers, that represents your car. This record, commonly called a "title," contains a VIN (vehicle identification number) that uniquely identifies the car to which it belongs. In addition, it contains other attributes of the car such as year, make, model, and color. The title also contains relationships; most notably, the title relates the vehicle to a person who owns it. In many states, the title is also a historical document, because it identifies every owner of the car from the time it was made.

Digital identity management is about creating, managing, using, and eventually destroying records like the one that contains the title for your car. These records might identify a person, a car, a computer, a piece of land, or almost anything else. Sometimes these records are created simply for inventory purposes, but the more interesting ones are created with other purposes in mind: allowing or denying access to a building, authorizing the creation of a file, allowing the transfer of funds, and so on. The relationships between identities and the authorized actions associated with them make digital identities useful, though, at the same time, difficult to manage.
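
To make the shape of such a record concrete, here is a minimal sketch of a title-like identity record as a simple data structure; the field names and sample values are illustrative, not drawn from any real titling system:

    # A minimal sketch of an identity record like the vehicle title:
    # a unique identifier, attributes, and relationships to other entities.
    # Field names and sample values are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Title:
        vin: str                   # uniquely identifies the subject (the car)
        year: int
        make: str
        model: str
        color: str
        owners: list = field(default_factory=list)  # relationships, oldest first

    title = Title(vin="1HGCM82633A004352", year=2003,
                  make="Honda", model="Accord", color="silver")
    title.owners.append("Alice")   # the title relates the vehicle to its owner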

The world of digital identity has its own nomenclature. Most of the terms are familiar but are used in specific ways. This section introduces some of that terminology.

A subject or entity is a person, organization, software program, machine, or other thing making a request to access a resource. A resource might be a web page, a piece of data in a database, or even a transaction on a credit card. To gain access to the resource, the subject lays claim to an identity. Throughout this book, we'll frequently use the word "subject" instead of "person" to remind us that non-human subjects such as machines or programs often use resources.

In this context, identities are collections of data about a subject that represent attributes, preferences, and traits. Attributes are acquired information describing a subject, such as medical history, past purchasing behavior, bank balance, credit rating, dress size, age, and so on. Preferences represent desires, such as preferred seating on an airline, favorite brand of hot dog, use of one encryption standard over another, preferred currency, and so on. Traits, like attributes, are features of the subject, but they are inherent rather than acquired. Another way of thinking about the difference is that attributes may change, while traits change slowly, if at all. Examples of traits include blue eyes for a person, or how and where a company was incorporated. Since the distinction between attributes, preferences, and traits rarely makes a difference in the design of an identity infrastructure, we'll typically use "attributes" to mean all three unless there's a specific need to distinguish among them.

Identity Scenarios in the Physical World


The concepts and words used in the last section can seem intimidating, but in reality, most of these concepts are perfectly understandable given our everyday experience in commercial transactions in the physical world. To see how some of these ideas map to our everyday understanding, let's consider a common transaction at a convenience store: buying beer.

When a person (i.e., the subject or entity) wants to buy beer (i.e., perform an action on a resource), he is required to submit proof that he is of legal drinking age. The common way to do that is by presenting a driver's license. A driver's license is a credential that asserts that a person has certain attributes and traits. The license contains authorization to perform certain tasks, specifically to drive a car. The clerk (i.e., security authority) examines the license to see if it looks real (i.e., determines the validity of the credential) and uses the picture (i.e., embedded biometric device) to see if the person presenting the license is the same person who owns it (i.e., authenticates the credential). Once certain that the license is authentic, the clerk reads the birth date (i.e., an attribute) from the license and determines whether the person is over 21 (i.e., consults a security policy determined by the state and makes a policy decision about permissions associated with the identity for a particular resource).

Now, suppose the person pays with a credit card. The credit card (a separate identity credential) is presented to the clerk. The clerk just saw the driver's license and so can establish the validity of this credential based on the first. The clerk, acting as the policy enforcement point, runs the card through the point-of-sale terminal, which transmits identity attributes from the card (the cardholder's name, credit card number, and expiration date) along with the resource to be accessed (credit in the amount necessary to buy the beer) to the bank, which acts as the policy decision point and determines whether or not the subject is entitled to credit in the necessary amount. The clerk receives the credit authorization (authorization decision assertion) and completes the transaction.
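
At bottom, the clerk's age check is a tiny policy decision over an attribute. A minimal sketch, with the drinking age and dates as illustrative values:

    # A minimal sketch of the clerk's policy decision: read an attribute
    # (birth date) from a validated credential and apply the policy.
    from datetime import date

    DRINKING_AGE = 21   # the state's policy, as a threshold on an attribute

    def may_buy_beer(birth_date: date, today: date) -> bool:
        years = today.year - birth_date.year - (
            (today.month, today.day) < (birth_date.month, birth_date.day))
        return years >= DRINKING_AGE

    print(may_buy_beer(date(1985, 6, 15), date(2011, 9, 29)))   # True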

In later chapters, we'll discuss these terms in detail and see how they apply in less-familiar scenarios.

Identity, Security, and Privacy


Digital identity is often thought of as a subtopic of computer or information security. Certainly, digital identity is an important part of security, but it has greater utility than just protecting information. We've already discussed how digital identity enables important business relationships. At the same time, information security is about more than simply performing authorization and authentication. Firewalls, for example, provide security but are not necessarily about identity.

Still, the goal of information security is to protect information from unauthorized access, destruction, or alteration. Privacy is the protection of the attributes, preferences, and traits associated with an identity from being disseminated beyond the subject's needs in any particular transaction. In a circular manner, privacy is built upon a foundation of good information security, which is, in turn, dependent on a good digital identity infrastructure.

IT GOVERNANCE COURSE 9: Information Security Governance

Information security governance focuses on the availability of services, integrity of information, and protection of data confidentiality. Information security governance has become a much more important activity during the last decade. The growing web-ification of business and services has accelerated this trend. The Internet and global connectivity extend the company’s network far beyond its traditional border. This places new demands on information security and its governance. Attacks can originate from not just inside the organization, but anywhere in the world. Failure to adequately address this important concern can have serious consequences.

The High Tech Computer Crimes

The following is a general listing of the most prominent types of computer crimes:

  • Denial of Service (DoS) and Distributed Denial of Service (DDoS) - Overloading or "hogging" a system's resources so that it is unable to provide the required services. In the distributed mode, requests for service from a particular resource can be launched from large numbers of hosts, where software has been planted to become active at a particular time or upon receiving a particular command.

  • Theft of passwords - Illegally acquiring a password to gain unauthorized access to an information system.

  • Network Intrusions - Unauthorized penetrations into networked computer resources.

  • Emanation eavesdropping - Receipt and display of information that is resident on computers or terminals, through the interception of radio-frequency (RF) signals generated by those computers or terminals.

  • Social engineering - Using social skills to obtain information, such as passwords or personal identification numbers (PINs), to be used in an attack against computer-based systems.

  • Illegal content of material - Storing or distributing illegal material; pornography is an example of this type of crime.

  • Fraud - Using computers or the Internet to perpetrate crimes such as auctioning material that will not be delivered after receipt of payment.

  • Software piracy - Illegal copying and use of software.

  • Dumpster diving - Theft of sensitive data, such as manuals and trade secrets, by gathering papers or media that have been discarded as garbage in dumpsters or at recycling locations.

  • Malicious code - Programs (such as viruses, Trojan horses, and worms) that, when activated, cause DoS or destruction/modification of the information on computers.

  • Spoofing of IP addresses - Inserting a false IP address into a message to disguise the original location of the message or to impersonate an authorized source.

  • Information warfare - Attacking the information infrastructure of a nation - including military/government networks, communication systems, power grids, and the financial community - to gain military and/or economic advantages.

  • Espionage


  • Destruction or alteration of information


  • Use of readily available attack scripts on the Internet - Scripts developed by others and readily available on the Internet, which unskilled individuals can employ to launch attacks on networks and computing resources.

  • Masquerading - Impersonating someone else, usually to gain higher access privileges to information that is resident on networked systems.

  • Embezzlement - Illegally acquiring funds, usually through the manipulation and falsification of financial statements.

  • Data-diddling - The modification of data.


  • Terrorism

Wednesday, September 28, 2011

Five of the Best Windows 7 New Features

As the newest operating system from Microsoft Corporation, Windows 7 has new features to offer, and these features will help decide its success in the IT market. Microsoft says Windows 7 offers many new features; five of the best are discussed in the paragraphs that follow.


The multi-touch function of Windows 7 gets the most attention among its new features. If multi-touch is pursued, Windows 7 will be the first operating system to bring it to the masses. It features an onscreen virtual keyboard as well as touch gestures for common mouse actions such as right-clicking and dragging.


Another cool new feature in Windows 7 is its advanced taskbar. Taskbars in earlier Windows versions could show active programs through animated icons, but the Windows 7 taskbar is a new experience: when the user hovers over an icon, a full-screen preview of its window is shown.


Multimedia is also easier to enjoy in Windows 7 because of its new Libraries feature. Music files are easily grouped in a library: instead of creating a new folder on a larger storage area and rerouting the library, the user just gives the library the folder's location, and all the music files are easily located and kept under control.


Another new feature in Windows 7 is its improved Windows Media Center. The new operating system also offers better connectivity with hardware devices such as mobile phones and digital cameras.

Tuesday, September 27, 2011

Low Cost Data Recovery Services San Antonio


Recover lost data for only US$15, flat rate. It is the lowest-cost data recovery you can find in San Antonio and around the world.


Our low-cost online data recovery service in San Antonio has helped many people recover lost data, both in San Antonio and around the world. The high cost of data recovery led us to find a way to help people.

Our online data recovery service has the experience and technical expertise to handle any type of data-loss situation: accidentally deleted data, a hard disk partition suddenly lost to a virus or system instability, a hard disk formatted or re-partitioned by another technician with all your valuable data gone, and many other logical problems.

Our data recovery services in San Antonio feature the industry's most advanced recovery tools, proprietary techniques, and the best experts in the business working to recover lost data.

Online data recovery saves time and money because your files can be recovered in a matter of hours instead of days. Plus, recovering your data remotely can be done from the convenience of your office or home.

Contact our online data recovery expert for any inquiry and free consultation.

Tag: Data Recovery Services San Antonio

Saturday, September 24, 2011

Is the Digital Divide Getting Smaller?

The past five years have seen an influx of gadgets, software innovations, and hardware tools that have changed the way we communicate and conduct our daily tasks. We are better connected with one another and have found cost-effective ways to communicate and exchange information. Because of this, one might be tempted to ask: is the digital divide getting smaller and changing? The answer, of course, is a resounding yes, and the reason is globalization.


The world is getting ever closer, so it feels perfectly normal to simply "call" a friend or family member using the computer and chat for hours. The digital divide is getting smaller not just for users but for manufacturers and innovators as well. More and more companies now realize that cooperation is key, and that they can create more business (and serve more people) by putting their brains together. Because of this renewed fervor for cooperation, changes abound: taking one thing and making it better so the end consumer or end user has a much-improved product.


The new motto is "more affordable and easier to use." The digital divide is getting smaller, and as a result things are getting easier and easier. People who lived before these times can only shake their heads in amazement and marvel at the wonders that technology provides.

Taking Advantage of News about the Development of Windows 7

Last year, news leaked that Microsoft was developing a new operating system called Windows 7. This operating system promises to upgrade and correct the issues found in Vista. Unlike previous development efforts, the development of Windows 7 has been carefully kept under wraps by Microsoft.

Recently, however, Microsoft released a pre-beta version of Windows 7, which you can download for testing purposes. The new operating system is still in active development, but Microsoft constantly releases updates on its Windows 7 blog.

If you want to try Windows 7 on your computer, there are several ways to take advantage of updates about Windows 7 development.

First, download the Windows 7 CD image directly from the Microsoft website. Once you have transferred the installation file to a CD, you can install the operating system on your computer.

Second, subscribe to the Windows 7 Engineering blog to get technical updates. Currently, device and hardware compatibility is being discussed on the blog.

Third, you can also get development updates from other official Microsoft blogs and sites. The MSDN forum is also a good place to find valuable information about Windows 7 development.

Fourth, if you want third-party views on Windows 7, search for white papers discussing the latest Microsoft implementations. TechRepublic and ZDNet can provide white papers on this subject. You can also participate in, or simply read, the discussions on their developer blogs. These sites can offer valuable and important insights on Windows 7.

Microsoft Windows 7 Says Goodbye to the Registry: Understanding the Impact of the Registry in Windows 7

Before Windows 7 was released, there were rumors that Microsoft would do away with the Registry system in its new operating system.

The Registry is a system-generated set of data that your computer uses to run applications and programs. All Windows operating systems depend on the Registry to function properly.

According to Microsoft, the Registry is used by its operating system to identify specific user profiles. It can also be used to run applications, retrieve stored data, and delete previous information. In effect, the Registry serves as the motor that keeps every Windows operating system running smoothly.
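
To see the Registry in action, here is a minimal sketch that reads one well-known value using Python's standard winreg module (Windows only):

    # A minimal sketch: read the product name from the Registry.
    # Windows only; this key is world-readable, so no elevation is needed.
    import winreg

    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SOFTWARE\Microsoft\Windows NT\CurrentVersion")
    product, _ = winreg.QueryValueEx(key, "ProductName")
    print(product)   # e.g., "Windows 7 Ultimate"
    winreg.CloseKey(key)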

One downside, however, is that the Registry can become so massive and unwieldy that it causes performance issues. As you use your computer, more Registry keys are added to your system; they accumulate in the system database and can cause a major slowdown of your operating system.

This led to speculation that Windows 7 would be developed with alternatives to the Registry. Specifically, users of Windows 7 would leverage the usual User Account control panel rather than changing the system-generated Registry directly, a scenario that could also mitigate some of the security issues found in Windows operating systems.

There is no indication, however, that Windows 7 totally eliminates the Registry. Windows 7 will need the Registry in order to stay compatible with legacy devices. It is also important for ensuring that drivers that previously worked on Vista continue to work on Windows 7.

Thursday, September 22, 2011

A New Breed of Hacker Tools and Defenses


Ed Skoudis, CISSP


The state-of-the-art in computer attack tools and techniques is rapidly advancing. Yes, we still face the tried-and-true, decades-old arsenal of traditional computer attack tools, including denial-of-service attacks, password crackers, port scanners, sniffers, and RootKits. However, many of these basic tools and techniques have seen a renaissance in the past couple of years, with new features and underlying architectures that make them more powerful than ever. Attackers are delving deep into widely used protocols and the very hearts of our operating systems. In addition to their growing capabilities, computer attack tools are becoming increasingly easy to use. Just when you think you have seen it all, a new and easy-to-use attack tool is publicly released with a feature that blows your socks off. With this constant increase in the sophistication and ease of use in attack tools, as well as the widespread deployment of weak targets on the Internet, we now live in the golden age of hacking.


The purpose of this chapter is to describe recent events in this evolution of computer attack tools. To create the best defenses for our computers, one must understand the capabilities and tactics of one's adversaries. To achieve this goal, this chapter describes several areas of advance among attack tools, including distributed attacks, active sniffing, and kernel-level RootKits, along with defensive techniques for each type of attack.


Distributed Attacks


One of the primary trends in the evolution of computer attack tools is the movement toward distributed attack architectures. Essentially, attackers are harnessing the distributed power of the Internet itself to improve their attack capabilities. The strategy here is pretty straightforward, perhaps deceptively so given the power of some of these distributed attack tools. The attacker takes a conventional computer attack and splits the work among many systems. With more and more systems collaborating in the attack, the attacker’s chances for success increase. These distributed attacks offer several advantages to attackers, including:





  • They may be more difficult to detect.




  • They usually make things more difficult to trace back to the attacker.




  • They may speed up the attack, lowering the time necessary to achieve a given result.




  • They allow an attacker to consume more resources on a target.





So, where does an attacker get all of the machines to launch a distributed attack? Unfortunately, enormous numbers of very weak machines are readily available on the Internet. The administrators and owners of such systems do not apply security patches from the vendors, nor do they configure their machines securely, often just using the default configuration right out of the box. Poorly secured computers at universities, companies of all sizes, government institutions, homes with always-on Internet connectivity, and elsewhere are easy prey for an attacker. Even lowly skilled attackers can take over hundreds or thousands of systems around the globe with ease. These attackers use automated vulnerability scanning tools, including homegrown scripts and freeware tools such as the Nessus vulnerability scanner (http://www.nessus.org), among many others, to scan large swaths of the Internet. They scan indiscriminately, day in and day out, looking to take over vulnerable systems. After taking over a suitable number of systems, the attackers will use these victim machines as part of the distributed attack against another target.


Attackers have adapted many classic computer attack tools to a distributed paradigm. This chapter explores many of the most popular distributed attack tools, including distributed denial-of-service attacks, distributed password cracking, distributed port scanning, and relay attacks.


Distributed Denial-of-Service Attacks


One of the most popular and widely used distributed attack techniques is the distributed denial-of-service (DDoS) attack. In a DDoS attack, the attacker takes over a large number of systems and installs a remotely controlled program called a zombie on each system. The zombies silently run in the background awaiting commands. An attacker controls these zombie systems using a specialized client program running on one machine. The attacker uses one client machine to send commands to the multitude of zombies, telling them to simultaneously conduct some action. In a DDoS attack, the most common action is to flood a victim with packets. When all the zombies are simultaneously launching packet floods, the victim machine will be suddenly awash in bogus traffic. Once all capacity of the victim's communication link is exhausted, no legitimate user traffic will be able to reach the system, resulting in a denial of service.


The DDoS attack methodology was in the spotlight in February 2000 when several high-profile Internet sites were hit with the attack. DDoS tools have continued to evolve, with new features that make them even nastier. The latest generation of DDoS attacks includes extensive spoofing capabilities, so that all traffic from the client to the zombies and from the zombies to the target has a decoy source address. Therefore, when a flood begins, the investigators must trace the onslaught back, router hop by router hop, from the victim to the zombies. After rounding up some of the zombies, the investigators must still trace from the zombies to the client, across numerous hops and multiple Internet service providers (ISPs). Furthermore, DDoS tools are employing encryption to mask the location of the zombies. In early generations of DDoS tools, most of the client software included a file with a list of network addresses for the zombies. By discovering such a client, an investigation team could quickly locate and eradicate the zombies. With the latest generation of DDoS tools, the list of network addresses at the client is strongly encrypted so that the client does not give away the location of the zombies.


Defenses against Distributed Denial-of-Service Attacks


To defend against any packet flood, including DDoS attacks, one must ensure that critical network connections have sufficient bandwidth and redundancy to eliminate simple attacks. If a network connection is mission critical, one should have at least a redundant T1 connection, because all lower connection speeds can easily be flooded by an attacker.


While this baseline of bandwidth eliminates the lowest levels of attackers, one must face the fact that one will not be able to buy enough bandwidth to keep up with attackers who have installed zombies on a hundred or thousand systems and pointed them at your system as a target. If one's system's availability on the Internet is critical to the business, one must employ additional techniques for handling DDoS attacks. From a technological perspective, one may want to consider traffic shaping tools, which can help manage the number of incoming sessions so that one's servers are not overwhelmed. Of course, a large enough cadre of zombies flooding one's connection could even overwhelm traffic shapers. Therefore, one should employ intrusion detection systems (IDSs) to determine when an attack is underway. These IDSs act as network burglar alarms, listening to the network for traffic that matches common attack signatures stored in the IDS database. From a procedural perspective, one should have an incident response team on stand-by for such alarms from the IDS. For mission-critical Internet connections, one must have the cell phone and pager numbers for one's ISP's own incident response team. When a DDoS attack begins, one's incident response team must be able to quickly and efficiently marshal the forces of the ISP's incident response team. Once alerted, the ISP can deploy filters in their network to block an active DDoS attack upstream.


Distributed Password Cracking


Password cracking is another technique that has been around for many years and is now being leveraged in distributed attacks. The technique is based on the fact that most modern computing systems (such as UNIX and Windows NT) have a database containing encrypted passwords used for authentication. In Windows NT, the passwords are stored in the SAM database. On UNIX systems, the passwords are located in the /etc/passwd or /etc/shadow files. When a user logs on to the system, the machine asks the user for a password, encrypts the value entered by the user, and compares the encrypted version of what the user typed with the stored encrypted password. If they match, the user is allowed to log in.


The idea behind password cracking is simple: steal an encrypted password file, guess a password, encrypt the guess, and compare the result to the value in the stolen encrypted password file. If the encrypted guess matches the encrypted password, the attacker has determined the password. If the two values do not match, the attacker makes another guess. Because user passwords are often predictable combinations of user IDs, dictionary words, and other characters, this technique is often very successful in determining passwords.
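
As a toy illustration of that guess-encrypt-compare loop, consider the following sketch; it uses an unsalted MD5 hash and a four-word dictionary purely for demonstration, whereas real password schemes add salts and slower hash functions:

    # A toy sketch of the guess-encrypt-compare loop described above.
    # Unsalted MD5 and a tiny wordlist, for illustration only.
    import hashlib

    stolen_hash = "5f4dcc3b5aa765d61d8327deb882cf99"   # MD5 of "password"
    wordlist = ["letmein", "secret", "password", "dragon"]

    for guess in wordlist:
        if hashlib.md5(guess.encode()).hexdigest() == stolen_hash:
            print("cracked:", guess)
            break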


Traditional password cracking tools automate the guess-encrypt-compare loop to help determine passwords quickly and efficiently. These tools use variations of the user ID, dictionary terms, and brute-force guessing of all possible character combinations to create their guesses for passwords. The better password-cracking tools can conduct hybrid attacks, appending and prepending characters in a brute-force fashion to standard dictionary words. Because most passwords are simply a dictionary term with a few special characters tacked on at the beginning or end, the hybrid technique is extremely useful. Some of the best traditional password-cracking tools are L0phtCrack for Windows NT passwords (available at http://www.l0pht.com) and John the Ripper for a variety of password types, including UNIX and Windows NT (available at http://www.openwall.com).


When cracking passwords, speed rules. Tools that can create and check more password guesses in less time will result in more passwords recovered by the attacker. Traditional password cracking tools address this speed issue by optimizing the implementation of the encryption algorithm used to encrypt the guesses. Attackers can gain even more speed by distributing the password-cracking load across numerous computers. To more rapidly crack passwords, attackers will simultaneously harness hundreds or thousands of systems located all over the Internet to churn through an encrypted password file.


To implement distributed password cracking, an attacker can use a traditional password-cracking tool in a distributed fashion by simply dividing up the work manually. For example, consider a scenario in which an attacker wants to crack a password file with ten encrypted passwords. The attacker could break the file into ten parts, each part containing one encrypted password, and then distribute each part to one of ten machines. Each machine runs a traditional password-cracking tool to crack the one encrypted password assigned to that system. Alternatively, the attacker could load all ten encrypted passwords on each of the machines and configure each traditional password-cracking tool to guess a different set of passwords, focusing on a different part of a dictionary or certain characters in a brute-force attack.
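
A minimal sketch of the dictionary-splitting idea, with the node count and file name as illustrative values:

    # A minimal sketch of dividing cracking work across nodes:
    # slice i of the dictionary goes to node i.
    NODES = 10

    with open("dictionary.txt") as f:
        words = f.read().split()

    work = [words[i::NODES] for i in range(NODES)]
    # work[0] for node 0, work[1] for node 1, and so on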


Beyond manually splitting up the work and using a traditional password-cracking tool, several native distributed password-cracking tools have been released. These tools help to automate the spreading of the workload across several machines and coordinate the computing resources as the attack progresses. Two of the most popular distributed password-cracking tools are Mio-Star and Saltine Cracker, both available at http://packetstorm.securify.com/distributed.


Defenses against Distributed Password Cracking


The defenses against distributed password cracking are really the same as those employed for traditional password cracking: eliminate weak passwords from your systems. Because distributed password cracking speeds up the cracking process, passwords need to be even more difficult to guess than in the days when nondistributed password cracking ruled. One must start with a policy that mandates users to establish passwords that are greater than a minimum length (such as greater than nine characters) and include numbers, letters, and special characters in each password. Users must be aware of the policy; thus, an awareness program emphasizing the importance of difficult-to-guess passwords is key. Furthermore, to help enforce a password policy, one may want to deploy password-filtering tools on one's authentication servers. When a user establishes a new password, these tools check the password to make sure it conforms to the password policy. If the password is too short, or does not include numbers, letters, and special characters, the user will be asked to select another password. The passfilt.dll program included in the Windows NT Resource Kit and the passwd+ program on UNIX systems implement this type of feature, as do several third-party add-on authentication products. One also may want to consider the elimination of standard passwords from very sensitive environments, using token-based access technologies.
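
A password-filtering hook of the kind described here boils down to a few character-class checks. The following is a minimal sketch; the exact length and character rules are illustrative:

    # A minimal password-policy filter sketch: length plus character classes.
    import string

    def acceptable(password: str) -> bool:
        return (len(password) > 9
                and any(c in string.digits for c in password)
                and any(c in string.ascii_letters for c in password)
                and any(c in string.punctuation for c in password))

    print(acceptable("short1!"))           # False: too short
    print(acceptable("longer-passw0rd"))   # True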


Finally, security personnel should periodically run a password-cracking tool against their own users' passwords to identify the weak ones before an attacker does. When weak passwords are found, there should be a defined and approved process for informing users that they should select a better password. Be sure to get appropriate permissions before conducting in-house password-cracking projects, to ensure that management understands and supports this important security program. Not getting management approval could negatively impact one's career.


Distributed Port Scanning


Another attack technique that lends itself well to a distributed approach is the port scan. A port is an important concept in the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), two protocols used by the vast majority of Internet services. Every server that receives TCP or UDP traffic from a network listens on one or more ports. These ports are like little virtual doors on a machine, where packets can go in or come out. The port numbers serve as addresses on a system where the packets should be directed. While an administrator can configure a network service to listen on any port, the most common services listen on well-known ports, so that client software knows where to send the packets. Web servers usually listen on TCP port 80, while Internet mail servers listen on TCP port 25. Domain Name Servers listen for queries on UDP port 53. Hundreds of other ports are assigned to various services in RFC 1700, a document available at http://www.ietf.org/rfc.html.


Port scanning is the process of sending packets to various ports on a target system to determine which ports have listening services. It is similar to knocking on the doors of the target system to see which ones are open. By knowing which ports are open on the target system, the attacker has a good idea of the services running on the machine. The attacker can then focus an attack on the services associated with these open ports. Furthermore, each open port on a target system indicates a possible entry point for an attacker. For example, the attacker can scan the machine and determine that TCP port 25 and UDP port 53 are open; this result tells the attacker that the machine is likely a mail server and a DNS server. While there are a large number of traditional port-scanning tools available, one of the most powerful (by far) is the Nmap tool, available at http://www.insecure.org.
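
Conceptually, the simplest scan is the TCP connect scan: attempt a full connection to each port and note which attempts succeed. A minimal sketch (scan only hosts you are authorized to test):

    # A minimal TCP connect-scan sketch: a port that accepts a connection
    # is open. Scan only hosts you are authorized to test.
    import socket

    def scan(host, ports):
        open_ports = []
        for port in ports:
            with socket.socket() as s:
                s.settimeout(0.5)
                if s.connect_ex((host, port)) == 0:   # 0 means connected
                    open_ports.append(port)
        return open_ports

    print(scan("127.0.0.1", [22, 25, 53, 80, 443]))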


Because a port scan is often the precursor to a more in-depth attack, security personnel often use IDS tools to detect port scans as an early-warning indicator. Most IDSs include specific capabilities to recognize port scans. If a packet arrives from a given source going to one port, followed by another packet from the same source going to another port, followed by yet another packet for another port, the IDS can quickly correlate these packets to detect the scan. This traffic pattern is shown on the left-hand side of Exhibit 11.1, where port numbers are plotted against source network address. IDSs can easily spot such a scan, and ring bells and whistles (or send an e-mail to an administrator).
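
That correlation can be sketched as counting distinct destination ports per source address; the threshold below is illustrative:

    # A minimal IDS-style correlation sketch: flag any source address
    # that probes many distinct ports. Threshold is illustrative.
    from collections import defaultdict

    PORT_THRESHOLD = 20
    ports_seen = defaultdict(set)   # source IP -> set of destination ports

    def observe(src_ip, dst_port):
        ports_seen[src_ip].add(dst_port)
        if len(ports_seen[src_ip]) > PORT_THRESHOLD:
            print("possible port scan from", src_ip)

As the next paragraphs explain, a distributed scan defeats exactly this source-keyed correlation, which is why keying on the destination address matters.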


Now consider what happens when an attacker uses a distributed approach for conducting the scan. Instead of a barrage of packets coming from a single address, the attacker will configure many systems to participate in the scan. Each scanning machine will send only one or two packets and receive the results. By working together, the scanning machines can check all of the interesting ports on the target system and send their results to be correlated by the attacker. An IDS looking for the familiar pattern of the traditional port scan will not detect the attack. Instead, the pattern of incoming packets will appear more random, as shown on the right side of Exhibit 11.1. In this way, distributed scanning makes detection of attacks more difficult.


Of course, an IDS system can still detect the distributed port scan by focusing on the destination address (i.e., the place where the packets are going) rather than the source address. If a number of systems suddenly send packets to several ports on a single machine, an IDS can deduce that a port scan is underway. But the attacker has raised the bar for detection by conducting a distributed scan. If the distributed scan is conducted over a longer period of time (e.g., a week or a month), the chances of evading an IDS are quite good for an attacker. Distributed port scans are also much more difficult to trace back to an attacker because the scan comes from so many different systems, none of which are owned by the attacker.


Several distributed port-scanning tools are available. An attacker can use the descriptively named Phpdistributedportscanner, which is a small script that can be placed on Web servers to conduct a scan. Whenever attackers take over a PHP-enabled Web server, they can place the script on the server and use it to scan other systems. The attacker interacts with the individual scanning scripts running on the various Web servers using HTTP requests. Because everything is Web based, distributed port scans are quite simple to run. This scanning tool is available at http://www.digitaloffense.net:8000/phpDistributedPortScanner/. Other distributed port scanners tend to be based on a client/server architecture, such as Dscan (available at http://packetstorm.securify.com/distributed) and SIDEN (available at http://siden.sourceforge.net).


Defenses against Distributed Scanning


The best defense against distributed port scanning is to shut off all unneeded services on one's systems. If a machine's only purpose is to run a Web server that communicates via HTTP and HTTPS, the system should have only TCP port 80 and TCP port 443 open. If one does not need a mail server running on the same machine as the Web server, one should configure the system so that the mail server is deactivated. If the X Window system is not needed on the machine, turn it off. All other services should be shut off, which would close all other ports. One should develop a secure configuration document that provides a step-by-step process for all system administrators in an organization for building secure servers.
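
Auditing which ports are actually listening is the first step toward shutting off unneeded services. A minimal sketch using the third-party psutil library (an assumption; netstat or ss provide the same information):

    # A minimal audit sketch: list listening TCP ports and owning PIDs so
    # unneeded services can be identified and disabled. Assumes psutil is
    # installed; may need root to see other users' processes.
    import psutil

    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN:
            print(conn.laddr.port, conn.pid)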


Additionally, one must ensure that IDS probes are kept up-to-date. Most IDS vendors distribute new attack signatures on a regular basis — usually once a month. When a new set of attack signatures is available, one should quickly test it and deploy it on the IDS probes so they can detect the latest batch of attacks.


Relay Attacks


A final distributed attack technique involves relaying information from machine to machine across the Internet to obscure the true source of the attack. As one can expect, most attackers do not want to get caught. By setting up extra layers of indirection between an attacker and the target, the attacker can avoid being apprehended. Suppose an attacker takes over half a dozen Internet-accessible machines located all over the world and wants to attack a new system. The attacker can set up packet redirector programs on the six systems. The first machine will forward any packets received on a given port to the second system. The second system would then forward them to the third system, and so on, until the new target is reached. Each system acts as a link in a relay chain for the attacker's traffic. If and when the attack is detected, the investigation team will have to trace the attack back through each relay point before finding the attacker.
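
Each link in such a chain is essentially a bidirectional TCP forwarder. A minimal sketch of a single link, with a hypothetical listening port and next-hop address:

    # A minimal sketch of one relay link: accept a TCP connection and pipe
    # bytes to and from the next hop. Host and ports are hypothetical.
    import socket, threading

    LISTEN_PORT = 8000
    NEXT_HOP = ("next-hop.example.net", 8000)

    def pipe(src, dst):
        while data := src.recv(4096):   # empty bytes means the peer closed
            dst.sendall(data)

    server = socket.socket()
    server.bind(("", LISTEN_PORT))
    server.listen(1)
    client, _ = server.accept()
    upstream = socket.create_connection(NEXT_HOP)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)   # relay the reverse direction in this thread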


Attackers often set up relay chains consisting of numerous systems around the globe. Additionally, to further foil investigators, attackers often try to make sure there is a great change in human language and geopolitical relations between the countries where the links of the relay chain reside. For example, the first relay may be in the United States, while the second may be in China. The third could be in India, while the fourth is in Pakistan. Finally, the chain ends in Iran for an attack against a machine back in the United States. At each stage of the relay chain, the investigators would have to contend with dramatic shifts in human language, less-than-friendly relations between countries, and huge law enforcement jurisdictional issues.


Relay attacks are often implemented using a very flexible tool called Netcat, which is available for UNIX at http://www.l0pht.com/users/10pht/nc110.tgz and for Windows NT at http://www.l0pht.com/~weld/netcat/. Another popular tool for creating relays is Redir, located at http://oh.verio.com/~sammy/hacks.


Defenses against Relay Attacks


Because most of the action in a relay attack occurs outside an organization's own network, there is little one can do to prevent such attacks. One cannot really stop attackers from bouncing their packets through a bunch of machines before being attacked. One's best bet is to make sure that systems are secure by applying security patches and shutting down all unneeded services. Additionally, it is important to cooperate with law enforcement officials in their investigations of such attacks.


Active Sniffing


Sniffing is another, older technique that is being rapidly expanded with new capabilities. Traditional sniffers are simple tools that gather traffic from a network. The user installs a sniffer program on a computer that captures all data passing by the computer's network interface, whether it is destined for that machine or another system. When used by network administrators, sniffers can capture errant packets to help troubleshoot the network. When used by attackers, sniffers can grab sensitive data from the network, such as passwords, files, e-mail, or anything else transmitted across the network.


Traditional Sniffing


Traditional sniffing tools are passive; they wait patiently for traffic to pass by on the network and gather the data when it arrives. This passive technique works well for some network types. Traditional Ethernet, a popular technology used to create a large number of local area networks (LANs), is a broadcast medium. Ethernet hubs are devices used to create traditional Ethernet LANs. All traffic sent to any one system on the LAN is broadcast to all machines on the LAN. A traditional sniffer can therefore snag any data going between other systems on the same LAN. In a traditional sniffing attack, the attacker takes over one system on the LAN, installs a sniffer, and gathers traffic destined for other machines on the same LAN. Some of the best traditional sniffers include Snort (available at http://www.snort.org) and Sniffit (available at http://reptile.rug.ac.be/~coder/sniffit/sniffit.html).
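
At its core, a passive sniffer just opens a raw capture socket and reads frames. A minimal Linux-only sketch using an AF_PACKET socket (requires root; the interface name is illustrative):

    # A minimal passive sniffer sketch for Linux: a raw AF_PACKET socket
    # receives every frame the interface sees. Requires root.
    import socket

    ETH_P_ALL = 0x0003   # capture all protocols
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind(("eth0", 0))  # interface name is illustrative

    while True:
        frame, _ = s.recvfrom(65535)
        print(len(frame), "bytes, Ethernet header:", frame[:14].hex())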


One of the commonly used defenses against traditional sniffers is a switched LAN. Contrary to an Ethernet hub, which acts as a broadcast medium, an Ethernet switch only sends data to its intended destination on the LAN. No other system on the LAN is able to see the data because the Ethernet switch sends the data to its appropriate destination and nowhere else. Another commonly employed technique to foil traditional sniffers is to encrypt data in transit. If the attackers do not have the encryption keys, they will not be able to determine the contents of the data sniffed from the network. Two of the most popular encryption protocols are the Secure Socket Layer (SSL), which is most often used to secure Web traffic, and Secure Shell (SSH), which is most often used to protect command-line shell access to systems.


Raising the Ante with Active Sniffing


While the defenses against passive sniffers are effective and useful to deploy, attackers have developed a variety of techniques for foiling them. These techniques, collectively known as active sniffing, involve injecting traffic into the network to allow an attacker to grab data that should otherwise be unsniffable. One of the most capable active sniffing programs available is Dsniff, available at http://www.monkey.org/~dugsong/dsniff/. One can explore Dsniff's various methods for sniffing by injecting traffic into a network, including MAC address flooding, spurious ARP traffic, fake DNS responses, and person-in-the-middle attacks against SSL.


MAC Address Flooding


An Ethernet switch determines where to send traffic on a LAN based on its media access control (MAC) address. The MAC address is a unique 48-bit number assigned to each Ethernet card in the world. The MAC address indicates the unique network interface hardware for each system connected to the LAN. An Ethernet switch monitors the traffic on a LAN to learn which plugs on the switch are associated with which MAC addresses. For example, the switch will see traffic arriving from MAC address AA:BB:CC:DD:EE:FF on plug number one. The switch will remember this information and send data destined for this MAC address only to the first plug on the switch. Likewise, the switch will autodetect the MAC addresses associated with the other network interfaces on the LAN and send the appropriate data to them.


One of the simplest active sniffing techniques involves flooding the LAN with traffic that has bogus MAC addresses. The attacker uses a program installed on a machine on the LAN to generate packets with random MAC addresses and feed them into the switch. The switch will attempt to remember all of the MAC addresses as they arrive. Eventually, the switch's memory capacity will be exhausted with bogus MAC addresses. When their memory fills up, some switches fail into a mode where traffic is sent to all machines connected to the LAN. By using MAC flooding, therefore, an attacker can bombard a switch so that the switch will send all traffic to all machines on the LAN. The attacker can then utilize a traditional sniffer to grab the data from the LAN.
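
The flooding traffic itself is simple to generate. A lab-only sketch using the third-party Scapy library (an assumption; the Dsniff suite's own flooding tool works similarly), to be run only against an isolated test switch:

    # A lab-only sketch of MAC flooding with Scapy (assumed installed; run
    # as root against an isolated test switch). Each frame carries a random
    # source MAC, filling the switch's address table.
    import random
    from scapy.all import Ether, IP, ICMP, sendp

    def rand_mac():
        return ":".join("%02x" % random.randint(0, 255) for _ in range(6))

    for _ in range(1000):
        frame = (Ether(src=rand_mac(), dst=rand_mac())
                 / IP(src="10.0.0.1", dst="10.0.0.2") / ICMP())
        sendp(frame, iface="eth0", verbose=False)   # interface is illustrative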


Spurious ARP Traffic


While some switches fail under a MAC flood in a mode where they send all traffic to all systems on the LAN, other switches do not. During a flood, these switches remember the initial set of MAC addresses that were autodetected on the LAN, and utilize those addresses throughout the duration of the flood. The attacker cannot launch a MAC flood to overwhelm the switch. However, an attacker can still undermine such a LAN by injecting another type of traffic based on the Address Resolution Protocol (ARP).


ARP is used to map Internet Protocol (IP) addresses into MAC addresses on a LAN. When one machine has data to send to another system on the LAN, it formulates a packet for the destination's IP address; however, the IP address is just a configuration setting on the destination machine. How does the sending machine determine which hardware device on the LAN to send the packet to? ARP is the answer. Suppose a machine on the LAN has a packet that is destined for IP address 10.1.2.3. The machine with the packet will send an ARP request on the LAN, asking which network interface is associated with IP address 10.1.2.3. The machine with this IP address will transmit an ARP response, saying, in essence, "IP address 10.1.2.3 is associated with MAC address AA:BB:CC:DD:EE:FF." When a system receives an ARP response, it stores the mapping of IP address to MAC address in a local table, called the ARP table, for future reference. The packet will then be delivered to the network interface with this MAC address. In this way, ARP is used to convert IP addresses into MAC addresses so that packets can be delivered to the appropriate network interface on the LAN. The results are stored in a system's ARP table to minimize the need for additional ARP traffic on the LAN.
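
The request/response exchange can be observed directly with a packet-crafting tool. A minimal sketch using Scapy (an assumption; requires root on your own LAN):

    # A minimal sketch of ARP resolution with Scapy: broadcast a who-has
    # request for 10.1.2.3 and print the is-at reply. Run as root.
    from scapy.all import ARP, Ether, srp

    request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst="10.1.2.3")
    answered, _ = srp(request, timeout=2, verbose=False)
    for _, reply in answered:
        print("10.1.2.3 is-at", reply[ARP].hwsrc)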


ARP includes support for a capability called the "gratuitous ARP." With a gratuitous ARP, a machine can send an ARP response although no machine sent an ARP request. Most systems are thirsty for ARP entries in their ARP tables, to help improve performance on the LAN. In another form of active sniffing, an attacker utilizes faked gratuitous ARP messages to redirect traffic for sniffing a switched LAN, as shown in Exhibit 11.2. For the exhibit, the attacker's machine on the LAN is indicated by a black hat.


The steps of this attack, shown in Exhibit 11.2, are:





  1. The attacker activates IP forwarding on the attacker’s machine on the LAN. Any packets directed by the switch to the black-hat machine will be redirected to the default router for the LAN.




  2. The attacker sends a gratuitous ARP message to the target machine. The attacker wants to sniff traffic sent from this machine to the outside world. The gratuitous ARP message will map the IP address of the default router for the LAN to the MAC address of the attacker's own machine. The target machine accepts this bogus ARP message and enters it into its ARP table. The target's ARP table is now poisoned with the false entry.




  3. The target machine sends traffic destined for the outside world. It consults its ARP table to determine the MAC address associated with the default router for the LAN. The MAC address it finds in the ARP table is the attacker's address. All data for the outside world is sent to the attacker's machine.




  4. The attacker sniffs the traffic from the line.




  5. The IP forwarding activated in Step 1 redirects all traffic from the attacker's machine to the default router for the LAN. The default router forwards the traffic to the outside world. In this way, the victim will be able to send traffic to the outside world, but it will pass through the attacker's machine to be sniffed on its way out.





This sequence of steps allows the attacker to view all traffic to the outside world from the target system. Note that, for this technique, the attacker does not modify the switch at all. The attacker is able to sniff the switched LAN by manipulating the ARP table of the victim. Because ARP traffic and the associated MAC address information are only transmitted across a LAN, this technique only works if the attacker controls a machine on the same LAN as the target system.


Fake DNS Responses


A technique for injecting packets into a network to sniff traffic beyond a LAN involves manipulating the Domain Name System (DNS). While ARP is used on a LAN to map IP addresses to MAC addresses, DNS is used across a network to map domain names into IP addresses. When a user types a domain name into some client software, such as entering www.skoudisstuff.com into a Web browser, the user's system sends out a query to a DNS server. The DNS server is usually located across the network on a different LAN. Upon receiving the query, the DNS server looks up the appropriate information in its configuration files and sends a DNS response to the user's machine that includes an IP address, such as 10.22.12.41. The DNS server maps the domain name to an IP address for the user.
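
From the client's perspective, the resolution step is a single call. A minimal sketch, reusing the chapter's placeholder domain (which may not actually resolve):

    # A minimal sketch of the client side of DNS: ask the resolver to map
    # a name to an address. The domain is the chapter's placeholder.
    import socket

    ip = socket.gethostbyname("www.skoudisstuff.com")
    print(ip)   # e.g., 10.22.12.41 in the chapter's example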


Attackers can redirect traffic by sending spurious DNS responses to a client. While there is no such thing as a gratuitous DNS response, an attacker that sits on any network between the target system and the DNS server can sniff DNS queries from the line. Upon seeing a DNS query from a client, the attacker can send a fake DNS response to the client containing an IP address of the attacker's machine. The client software on the user's machine will send packets to this IP address, thinking that it is communicating with the desired server. Instead, the information is sent to the attacker's machine. The attacker can view the information using a traditional sniffer and relay the traffic to its intended destination.


Person-in-the-Middle Attacks against SSL


Injecting fake DNS responses into a network is a particularly powerful technique when it is used to set up a person-in-the-middle attack against cryptographic protocols such as SSL, which is commonly used for secure Web access. Essentially, the attacker sends a fake DNS response to the target so that a new SSL session is established through the attacker’s machine. As highlighted in Exhibit 11.3, the attacker uses a specialized relay tool to set up two cryptographic sessions: one between the client and the attacker, and the other between the attacker and the server. While the data moves between these sessions, the attacker can view it in cleartext.


The steps shown in Exhibit 11.3 include:





  1. The attacker activates Dsniff's dnsspoof program, a tool that sends fake DNS responses. Additionally, the attacker activates another Dsniff tool called "webmitm," an abbreviation for Web Monkey-in-the-Middle. This tool implements a specialized SSL relay.




  2. The attacker observes a DNS query from the victim machine and sends a fake DNS response. The fake DNS response contains the IP address of the attacker’s machine.




  3. The victim receives the DNS response and establishes an SSL session with the IP address included in the response.




  4. The webmitm tool running on the attacker's machine establishes an SSL session with the victim machine and another SSL session with the actual Web server that the client wants to access.




  5. The victim sends data across the SSL connection. The webmitm tool decrypts the traffic from the SSL connection with the victim, displays it for the attacker, and encrypts the traffic for transit to the external Web server. The external Web server receives the traffic, not realizing that a person-in-the-middle attack is occurring.




While this technique is quite effective, it does have one limitation from the attacker's point of view. When establishing the SSL connection between the victim and the attacker's machine, the attacker must send the victim an SSL digital certificate that belongs to the attacker. To decrypt all data sent from the target, the attacker must use his or her own digital certificate, and not the certificate from the actual destination Web server. When the victim's Web browser receives the bogus certificate from the attacker, it will display a warning message to the user. The browser will indicate that the certificate presented by the server was signed by a certificate authority that is not trusted by the browser. The browser then gives the user the option of establishing the connection by simply clicking on a button labeled "OK" or "Connect." Most users do not understand the warning messages from their browsers and will continue the connection without a second thought. The browser will be satisfied that it has established a secure connection because the user told it to accept the attacker's certificate. After continuing the connection, the attacker will be able to gather all traffic from the SSL session. In essence, the attacker relies on the fact that trust decisions about SSL certificates are left in the hands of the user.
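
Programs, unlike users, can be made to refuse untrusted certificates outright. A minimal defensive sketch using Python's standard ssl module, with an illustrative host name:

    # A minimal defensive sketch: require a trusted certificate instead of
    # leaving the trust decision to the user. Host name is illustrative.
    import socket, ssl

    context = ssl.create_default_context()   # verifies against trusted CAs
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED

    with socket.create_connection(("www.example.com", 443)) as sock:
        # wrap_socket raises ssl.SSLCertVerificationError on a bad chain
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])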


The same basic technique works against the Secure Shell (SSH) protocol used for remote command-shell access. Dsniff includes a tool called sshmitm that can be used to set up a person-in-the-middle attack against SSH. Similar to the SSL attack, Dsniff establishes two SSH connections: one between the victim and the attacker, and another between the attacker and the destination server. Also, just as the Web browser complained about the modified SSL certificate, the SSH client will complain that it does not recognize the public key used by the SSH server. The SSH client will still allow the user, however, to override the warning and establish the SSH session, so the attacker can view all traffic.


Defenses against Active Sniffing Techniques


Having seen how an attacker can grab all kinds of useful information from a network using sniffing tools, how can one defend against these attacks? First, whenever possible, encrypt data that gets transmitted across the network. Use secure protocols such as SSL for Web traffic, SSH for encrypted log-in sessions and file transfer, S/MIME for encrypted e-mail, and IPSec for network-layer encryption. Users must be equipped to apply these tools to protect sensitive information, both from a technology and an awareness perspective.


It is especially important that system administrators, network managers, and security personnel understand and use secure protocols to conduct their job activities. Never telnet to firewalls, routers, sensitive servers, or public key infrastructure (PKI) systems! It is just too easy for an attacker to intercept one's password, which telnet transmits in cleartext. Additionally, pay attention to those warning messages from the browser and SSH client. Do not send any sensitive information across the network using an SSL session created with an untrusted certificate. If the SSH client warns that the server public key mysteriously changed, investigate.


Additionally, one really should consider getting rid of hubs because they are just too easy to sniff through. Although switches cost more than hubs, they improve both security and performance. If a complete migration to a switched network is impossible, at least consider using switched Ethernet on critical network segments, particularly the DMZ.


Finally, for networks containing very sensitive systems and data, enable port-level security on your switches by configuring each switch port with the specific MAC address of the machine using that port, to prevent MAC flooding and fake ARP messages. Furthermore, for extremely sensitive networks, such as Internet DMZs, use static ARP tables on the end machines, hard-coding the MAC addresses of all systems on the LAN. Port security on a switch and hard-coded ARP tables can be very difficult to manage because swapping components or even Ethernet cards requires updating the MAC addresses stored in several systems, but for highly sensitive networks such as Internet DMZs this level of security is required and should be implemented.


The Proliferation of Kernel-Level RootKits


Just as attackers are targeting key protocols such as ARP and DNS at a very fundamental level, so too are they exploiting the heart of our operating systems. In particular, a great deal of development is underway on kernel-level RootKits. To gain a better understanding of kernel-level RootKits, one should first analyze their evolutionary ancestors, traditional RootKits.


Traditional RootKits


A traditional RootKit is a suite of tools that allows an attacker to maintain superuser access on a system. Once an attacker gets root-level control on a machine, the RootKit lets the attacker maintain that access. Traditional RootKits usually include a backdoor so the attacker can access the system, bypassing normal security controls. They also include various programs to let the attacker hide on the system. Some of the most fully functional traditional RootKits include Linux RootKit 5 (lrk5) and T0rnkit, which runs on Solaris and Linux. Both of these RootKits, as well as many others, are located at http://packetstorm.securify.com/UNIX/penetration/rootkits.


Traditional RootKits implement backdoors and hiding mechanisms by replacing critical executable programs included in the operating system. For example, most traditional RootKits include a replacement for the /bin/login program, which is used to authenticate users logging into a UNIX system. A RootKit version of /bin/login usually includes a backdoor password, known by the attacker, that can be used for root-level access of the machine. The attacker will write the new version of /bin/login over the earlier version, and modify the timestamps and file size to match the previous version.


Just as the /bin/login program is replaced to implement a backdoor, most RootKits include Trojan horse replacement programs for other UNIX tools used by system administrators to analyze the system. Many traditional RootKits include Trojan horse replacements for the ls command (which normally shows the contents of a directory). Modified versions of ls will hide the attacker's tools, never displaying their presence. Similarly, the attackers will replace netstat, a tool that shows which TCP and UDP ports are in use, with a modified version that lies about the ports used by an attacker. Likewise, many other system programs will be replaced, including ifconfig, du, and ps. All of these programs act like the eyes and ears of a system administrator. The attacker utilizes a traditional RootKit to replace these eyes and ears with new versions that lie about the attacker's presence on the system.


To detect traditional RootKits, many system administrators employ file system integrity checking tools, such as the venerable Tripwire program available at http://www.tripwire.com. These tools calculate cryptographically strong hashes of critical system files (such as /bin/login, ls, netstat, ifconfig, du, and ps) and store these digital fingerprints on a safe medium such as a write-protected floppy disk. Then, on a periodic basis (usually daily or weekly), the integrity-checking tool recalculates the hashes of the executables on the system and compares them with the stored values. If there is a change, the program has been altered, and the system administrator is alerted.


Kernel-Level RootKits


While traditional RootKits replace critical system executables, attackers have gone even further by implementing kernel-level RootKits. The kernel is the heart of most operating systems, controlling access to all resources such as the disk, system processor, and memory.



[Exhibit 11.4: A system with a traditional RootKit versus a system with a kernel-level RootKit. On the left, the kernel and Tripwire remain good while application programs such as login, ps, and ifconfig are replaced; on the right, the application programs remain good but a Trojan kernel module subverts the kernel itself.]
Kernel-level RootKits modify the kernel itself, rather than manipulating application-level programs as traditional RootKits do. As shown on the left side of Exhibit 11.4, a traditional RootKit can be detected because a file system integrity tool such as Tripwire can rely on the kernel to let it check the integrity of application programs. When the application programs are modified, the good Tripwire program utilizes the good kernel to detect the Trojan horse replacement programs.


A kernel-level RootKit is shown on the right-hand side of Exhibit 11.4. While all of the application programs are intact, the kernel itself is rotten, facilitating backdoor access by the attacker and lying to the administrator about the attacker's presence on the system. Some of the most powerful kernel-level RootKits include Knark for Linux, available at http://packetstorm.securify.com/UNIX/penetration/rootkits; Plasmoid's Solaris kernel-level RootKit, available at http://www.infowar.co.uk/thc/slkm-1.0.html; and a Windows NT kernel-level RootKit available at http://www.rootkit.com.


While a large number of kernel-level RootKits have been released with a variety of features, the most popular capabilities of these tools include:





  • Execution redirection. This capability intercepts a call to run a certain application and maps that call to run another application of the attacker's choosing. Consider a scenario involving the UNIX /bin/login routine. The attacker will install a kernel-level RootKit and leave the /bin/login file unaltered. All execution requests for /bin/login (which occur when anyone logs in to the system) will be mapped to the hidden file /bin/backdoorlogin. When a user tries to log in, the /bin/backdoorlogin program will be executed, containing a backdoor password allowing for root-level access. However, when the system administrator runs a file integrity checker such as Tripwire, the standard /bin/login routine is analyzed. Only execution is redirected; one can look at the original file /bin/login and verify its integrity. This original routine is unaltered, so the Tripwire hash will remain the same.




  • File hiding. Many kernel-level RootKits let an attacker hide any file in the file system. If any user or application looks for the file, the kernel will lie and say that the file is not present on the machine. Of course, the file is still on the system, and the attacker can access it when required.




  • Process hiding. In addition to hiding files, the attacker can use the kernel-level RootKit to hide a running process on the machine.





Each of these capabilities is quite powerful by itself. Taken together, they offer an attacker the ability to completely transform the machine at the attacker's whim. The system administrator will have a view of the system created by the attacker, with everything looking intact. But in actuality, the system will be rotten to the core, quite literally. Furthermore, detection of kernel-level RootKits is often rather difficult because all access to the system relies on the attacker-modified kernel.


Kernel-Level RootKit Defenses


To stop attackers from installing kernel-level RootKits (or traditional RootKits, for that matter), one must prevent the attackers from gaining superuser access on one's systems in the first place. Without superuser access, an attacker cannot install a kernel-level RootKit. One must configure systems securely, disabling all unneeded services and applying all relevant security patches. Hardening systems and keeping them patched are the best preventative means for dealing with kernel-level RootKits.


Another defense involves deploying kernels that do not support loadable kernel modules (LKMs), a feature of some operating systems that allows the kernel to be dynamically modified. LKMs are often used to implement kernel-level RootKits. Linux kernels can be built without support for kernel modules; unfortunately, Solaris systems up through and including Solaris 8 offer no way to disable kernel modules. For critical Linux systems, such as Internet-accessible Web, mail, DNS, and FTP servers, one should build nonmodular kernels that cannot accept LKMs. Creating nonmodular kernels eliminates the vast majority of these types of attacks.


Conclusions


The arms race between computer defenders and computer attackers continues to accelerate. As attackers devise methods for widely distributed attacks and burrow deeper into our protocols and operating systems, we must work even more diligently to secure our systems. Do not lose heart, however. Sure, the defensive techniques covered in this chapter can be a lot of work. However, by carefully designing and maintaining systems, one can maintain a secure infrastructure.

How to Install Free Windows 7 on Your Computer?

 

Microsoft has officially released the beta version of Windows 7 to the general public. You can download a free Windows 7 copy directly from the Microsoft website. The beta version is a trial copy, so it has an expiration date.

When you attempt to download the free Windows 7 beta version, you will be prompted to register with the Microsoft network. Registration will enable you to get the provisional license keys issued by Microsoft for Windows 7 Beta.

After downloading the program, you can now install it on your computer. Here are the simple steps you can follow for a quick installation of Windows 7.

First, make sure that you have a DVD burner installed on your computer. You will need it to transfer the installation file from your computer to a DVD. The download is not an executable program; it is an ISO disc image, so it must be burned to a DVD before it can be read.

Second, burn the Windows 7 copy to your blank DVD. Back up your data and files if you will perform a full system installation of Windows 7. Alternatively, you can allot a hard drive partition where Windows 7 can reside; installing it on a separate partition gives you a dual-boot computer with two operating systems.

Third, insert the Windows 7 disc into the DVD-ROM drive. Reboot your computer so the system can detect the operating system on start-up. Simply follow the instructions of the installation wizard and, in just a few minutes, the Windows 7 operating system will be running on your machine.

Wednesday, September 21, 2011

Enterprise Governance – Budgeting Best Practice

The Problems with Traditional Budgeting

Much of the blame for corporate governance failure can be attributed to flawed budgeting systems: traditional bottom-up budgeting not only consumes a huge amount of executives' time, but also forces them into endless rounds of dull meetings and tense negotiations. Many believe the traditional approach also encourages line managers to 'play the game' by setting targets low and inflating results. The main concern is that budgeting has become so embedded in corporate life that it is now accepted as 'business as usual' - no matter how destructive.

The traditional budget also fails to allow for change within the fiscal year. Budgets based on a 12-month cycle generally leave no room for innovation because they cannot take unforeseen events into account. According to Hope and Fraser,3 founders of the Beyond Budgeting movement, budgets not only act as barriers to change but actually fail to provide the order and control that managers believe they deliver. Hope and Fraser go on to condemn budgets because:

• They encourage incremental thinking and tend to set ceilings on growth expectations and a floor for cost reductions, thus stifling real breakthrough improvement.

• They do not deliver on shareholder value, an increasingly important issue.

• They fail to provide the CEO with reliable numbers, both current and forecast. Budgets are typically extrapolations of existing trends with little attention being paid to the future.

• They act as barriers to exploiting synergies across the business units - they endorse the parochial behaviour of 'defend your own turf'.

• They are overly bureaucratic, time-consuming exercises.

Other identifiable disadvantages include the following:4

1 They become obsolete too quickly and add little value given the time required to prepare them.

2 Budgets concentrate on cost reduction and not on value creation.

3 They are a form of corporate/central control.

4 There is little active participation from line managers and lots of interference from the centre to 'make the numbers'.

5 They are time-consuming and costly to compile; they typically consume between 20 and 30 per cent of management's time.

6 They constrain responsiveness and flexibility and are often cited as being barriers to change. The focus is often on beating the budget and not maximising the organisation's potential.

7 Budgets are rarely strategically focused - they tend to be internally driven and focus on current year results.

8 Budgets add little value - finance personnel spend most of their time putting the information together and only about a quarter of their time doing any analysis.

9 Budgets encourage political game playing - this is evident when the budget results are linked to remuneration.5

10 One of the most dangerous shortcomings is that the process often ignores, and consequently sabotages, strategic planning.6 Budgets do not reflect the emerging network structures that organisations are adopting.7

 

 

Alternatives to Traditional Budgeting

Now that organisations are aware of the shortcomings of the traditional budget technique, many have turned their attention towards re-inventing the budget so that it becomes a continuous planning process.

Rolling Forecasts

Unlike the static traditional budget, a continuous budget can change as the organisation's objectives and strategies change. This method of dynamic budgeting is known as rolling forecasts, where forecasts are updated every few months - in effect, reassessing the company's outlook several times a year. In this way the financial forecast reflects not only a business's most recent monthly results but also any material changes to its business outlook or the economy. Rolling forecasts place a bigger emphasis on the strategic objectives of the organisation and help to narrow the gap between the overall strategic plan and the operational budget, which the traditional approach failed to do effectively.

'The budgeting process is quickly changing from a once-a-year event to a dynamic process that's in a constant state of flux. Organisations are finding that they can compete far more effectively when they truly understand business conditions and can adjust their budgeting to reflect the opportunities and challenges,' explains Lee Geishecker, a research analyst with Gartner in Stamford, Connecticut, US.

Geishecker believes that a dynamic budgeting model can produce enormous dividends by providing key insights into trends, patterns and changing circumstances. This enables companies to budget strategically rather than simply reacting to data that is six months or a year old. Geishecker adds that the dynamic budgeting process meshes with the trend towards a more strategic finance department. Instead of number crunching, managing data and distributing spreadsheets, finance managers can use dynamic budgeting to transform the numbers into knowledge.

Says Geishecker: 'For the first time, companies have the tools to execute their business plan and mission with a good deal of precision.'

A rolling budget demands that employees and managers adopt an entirely different mindset. It requires finance not only to collect, sort and analyse data, but also to strengthen organisational links and help company managers understand the dynamics of the enterprise. Company managers must share information appropriately and use it to maximum advantage.

Boston-based consultancy the Aberdeen Group8 suggests in its report e-Planning: Fixing the Broken Planning Process that organisations that have already successfully adopted rolling forecasts also have an integrated software system that can do the following:

• Gather information at weekly or monthly intervals, rather than annually or semi-annually.

• Adapt to the information needs of professionals in different positions throughout the enterprise.

• Reconcile, rather than merely equalise, top-down planning and bottom-up budgeting almost instantaneously.

• Encourage what-if modelling, dynamic goal setting, gap analysis and financial analysis.

Zero-Based Budgeting (ZBB)

Unlike rolling forecasts, ZBB requires managers to budget their activities as if the activities had no prior allocations or balances - in other words, the starting point is zero. It became popular in the 1970s and 1980s and proved to be a useful one-off exercise for reviewing discretionary overheads. As these are a large and growing proportion of total costs in many firms, significant cost reductions and resource allocations can often be achieved through ZBB. When used effectively, it forces management to look at the upcoming operation and all the costs associated with those operations. Starting from zero effectively forces managers to forecast their anticipated resource requirements.9

However, it can be labour-intensive and relies on individual managers to be able to construct their budget in the detail required. Another problem is that ZBB is applied hierarchically by functional department whereas the real opportunities for improvement are more likely to be found by reviewing costs by business process.10

 

 

Activity-Based Budgeting (ABB)

ABB is a concept developed to consider costs from the perspective of their relationship with the activities and throughputs of the organisation. Whereas activity-based costing (see Chapter 5) attempts to improve the understanding management has of costs, ABB takes the next step in this process and uses the information for developing detailed targets and forecasts.

This approach offers a number of advantages including better identification of resource needs, the ability to set more realistic budgets, increased staff participation, and clearer linking of costs with staff responsibilities.11

 

 

Beyond Budgeting

The Beyond Budgeting movement began life as a research project carried out by the Consortium for Advanced Manufacturing International (CAM-I) - a professional organisation created to improve the strategic process. According to Beyond Budgeting advocates, the traditional performance management model is too rigid to reflect today's fast-moving economy. As such, they view the traditional budget as a form of 'control by constraint'.

Their research concludes that firms not only need more effective strategic management but also need to redesign their organisations to devolve authority more effectively to the front line. Beyond Budgeting companies therefore aim to create consistent value streams by giving managers control of their actions and using simple measures based on key value drivers geared to beating the competition. Leading and lagging indicators help to monitor value creation and provide an early warning system against a financial downturn.12

At the core of the Beyond Budgeting philosophy lies a shift in emphasis from performance management based on agreed budget targets to one based on people, empowerment and adaptive management processes. This concept is further underpinned by the following Beyond Budgeting principles:13

• Governance - use clear values and boundaries as a basis for action, not mission statements and plans.

• Performance responsibility - make managers responsible for competitive results, not for meeting the budget.

• Delegation - give people the freedom and ability to act, do not control and constrain them.

• Structure - organise around networks and processes, not functions and departments.

• Co-ordination - co-ordinate cross-company interactions through process design and fast information systems, not detailed action through budgets.

• Leadership - challenge and coach people, do not command and control them.

• Goal setting - beat competitors not budgets.

• Strategy process - make the strategy process a continuous and inclusive process, not a top-down annual event.

• Anticipatory management - use anticipatory systems for managing strategy, not to make short-term corrections.

• Resource management - make resources available to operations when required at a fair cost, do not allocate them from the centre.

• Measurement and control - use a few key indicators to control the business, not a mass of detailed reports.

• Motivation and rewards - base rewards on company- and unit-level competitive performance, not pre-determined targets.

To date, there have been a number of adopters of Beyond Budgeting, including IKEA, Volvo Cars and the Swedish bank Svenska Handelsbanken, which abandoned budgets in 1970. Since adopting this approach, Handelsbanken has grown to 8000 employees and 530 branches, with the branch network earning 80 per cent of the group's profit. It now has one of the lowest cost-to-income ratios of the 30 largest universal banks in Europe.14

However, according to a survey conducted by CIMA in 2000,15 which asked 1000 of its members about their budgeting experiences between 1995 and 2000 and what they thought the future might hold, budgets were, and will continue to be, the most important tool for management accountants in fulfilling their organisational role.

Process Improvement Techniques and the Challenge for Budgeting

Leading companies are achieving more accurate forecasts, faster and at lower cost, by using explicit forecasting models that are deliberately kept separate from their financial management systems. The models are based on clear assumptions; when the criteria change, the assumptions change, and a new forecast is generated quickly with virtually no manual intervention.

 

 

Best Practices

• Forecasts should be 'Assumption not opinion based'.16

• 'Lean not mean'17 - the cost of budgeting and planning. Many companies are reducing the costs of financial planning and reporting by judicious investment in IT to create common, widely accessible cost and revenue databases. These are designed to create a single view of the company, reducing the duplication of effort involved in running separate systems. Leading companies are also light on their review process, focusing on a few key financial measures rather than reviewing every line item.

• Strategy execution18 - competition not budget focused. Leading companies are extremely externally focused; comparisons are made not with the budget but with the competition. Targets are based not on current performance but on external benchmarks. Incentives are disconnected from budget achievement and focused on beating the competition - both financially and in terms of achieving externally benchmarked non-financial targets.

• Action not explanation oriented. Leading companies are less concerned with the explanation of past performance than managing future results. This is done by forecasting and explaining variances before the variance occurs; managing the forecast and not the actual results; focusing on taking the actions that really drive performance, most of which are non-financial.

• Strategically not financially managed. Those who lead in this approach understand that better financial performance comes from developing and executing good competitive strategies; it does not come solely from better financial management. They plan and manage investments separately from the day-to-day operation of the business. They focus more on the achievement of non-financial targets than they do on the monthly financial results. Under traditional budgeting, control was exercised by business units and divisions reporting their actual performance and variances, with HQ left to use this information to predict the year-end result. Those who take the non-traditional approach instead trust their managers to tell them what they will achieve.

In conclusion, it is evident that the traditional budget is simply inadequate in today's fast-changing climate. Many improvements have been proposed, and the chosen approach will depend largely on the individual organisation itself. To date, the most common approach seems to be rolling budgets or re-forecasts, which do attack the inefficiencies of the annual budget. However, the literature suggests that although rolling budgets are an improvement, they still do not solve the problem of the cost and effort involved in the budgeting process.

This highlights the argument for web-based applications. These applications are all aimed at streamlining the budgeting cycle, and evidence shows that many organisations are advocating them. They accommodate the process of rolling forecasts and, with their web capability, shorten the time and effort involved, mainly through a central database and support for multiple users. To some, these systems may seem to be in their infancy and very costly to deploy; however, the question is whether they improve the budgeting process, and the literature finds that they do.

As the countless advertisements for consultancy and software firms remind us, the rules of the game are changing. In today's environment of global competition, situations can change rapidly and new competitors enter markets with ease. Organisations have long been involved in planning and evaluating their performance by measuring financial returns, setting performance standards and comparing budgetary outcomes with plans. For effective enterprise management, this involves the measurement of both overall and business unit performance in relation to the objectives identified in the planning process. In this way, corporate performance management systems are a key factor in ensuring the successful implementation of an organisation's strategy.

Even with excellent budgeting and forecasting systems in place, predicting customer behaviour and revenues is fraught with difficulty:

It all comes down to demand. The limitations of any system are always going to be tied to the fact that you are waiting on inputs from somebody or something. If you are really going to forecast demand correctly, you have to know - with absolute certainty - that a customer is going to give you in three weeks the order you are forecasting today. Unfortunately, nobody has yet invented the virtual crystal ball or crystalball.com. If they have, I would love to see it.

(Jonathan Chadwick, VP of Corporate Finance and Planning at Cisco, following the company's write-off of $2.2bn excess inventory in early April 2001.)19

Collaborative Planning - Reinventing World-Class Budgeting

One of the criticisms of the alternatives to planning and budgeting is that they attempt to replace the existing process rather than fix or remedy its shortcomings. Our research shows that while many of the criticisms levelled against budgets are well founded, in many cases finance professionals recognise that budgeting does play an important role with respect to cost control, resource allocation and other important areas.

So why has the traditional budget survived for so long? We believe that, despite the bad reviews from academics and consultants, budgeting survives because:

• It is simple, if time-consuming, and not as resource-hungry as some of the alternatives proposed.

• It is scalable and can be used by organisations ranging from a corner shop to a global corporation.

• It was the first tool devised to enable strategy to be turned into action.

• It encourages a consistent view across the organisation.

As previously discussed, the CIMA (2000) survey showed that budgets were, and will continue to be, the most important tool for management accountants in fulfilling their organisational role.

Although the alternative methods may suggest that overhauling the budget is a necessity in today's volatile environment, the need for forecasting and planning remains. Cash flow forecasts, rolling forecasts and cost forecasts are still a major part of Beyond Budgeting, which therefore still leaves the problem of the time taken and the quality of information.

What if, instead of trying to replace or move beyond budgeting, we could fix budgeting and make it better? Richard Harborne, a management consultant with PwC, suggests that the budgeting process is missing two critical characteristics: (1) it should be a fully integrated process; and (2) it should contain a complete set of process components (Figure 6.1). Because business planning cuts across internal functional and geographic boundaries, it demands that the components of the process and the activities of the people involved be seamlessly integrated.

Producing a strategic plan that is disconnected from operations, or tracking performance measures that do not reflect corporate realities, will probably cause decision-makers to make incorrect and costly decisions. In addition, a fully integrated set of components is required that fits your company's size, structure, industry and business model. It means taking the time to clarify the reasons why you plan and to define the outputs of the process. Doing this requires a collaborative environment in which managers can work together to share insights, ideas and their mental models of the business across time and locations.
