Wednesday, May 12, 2010
Demilitarized zone (DMZ)
A demilitarized zone (DMZ) is an area where you can place a public server for access by people you might not otherwise trust. By isolating a server in a DMZ, you can hide or remove access to other areas of your network. You can still reach the server from your network, but others aren't able to access further network resources. This can be accomplished using firewalls to isolate your network.
When establishing a DMZ, you assume that the person accessing the resource isn't necessarily someone you would trust with other information. Figure 1.13 shows a server placed in a DMZ. Notice that the rest of the network isn't visible to external users, which lowers the threat of intrusion into the internal network.
Tip: Anytime you want to separate public information from private information, a DMZ is an acceptable option.
The easiest way to create a DMZ is to use a firewall that can transmit in three directions: to the internal network, to the external world (the Internet), and to the public information you're sharing (the DMZ). From there, you can decide what traffic goes where; for example, HTTP traffic would be sent to the DMZ, and e-mail would go to the internal network.
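The routing decision the three-legged firewall makes can be sketched as a simple port-based classifier. This is an illustrative sketch only; the zone names and the port-to-zone policy below are assumptions for the example, and real firewalls match on far more than the destination port.

```python
# Minimal sketch of a three-legged firewall's forwarding decision
# (illustrative; real firewalls inspect much more than the port).
DMZ, INTERNAL = "DMZ", "internal network"

# Assumed policy, following the example in the text: web traffic goes
# to the public server in the DMZ, e-mail goes to the internal network.
PORT_ZONES = {
    80: DMZ,       # HTTP
    443: DMZ,      # HTTPS
    25: INTERNAL,  # SMTP (e-mail)
}

def route(dest_port):
    """Return the zone a packet is forwarded to, or None to drop it."""
    return PORT_ZONES.get(dest_port)

print(route(80))   # HTTP -> DMZ
print(route(25))   # SMTP -> internal network
```

Unlisted ports fall through to `None`, i.e. the default-deny stance you would want at a zone boundary.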
Security Zones: Extranet
An extranet is illustrated in the figure above. Note that this network provides a connection
between the two organizations. The connection may be through the Internet; if so, these
networks would use a tunneling protocol to accomplish a secure connection.
Security Zones: Intranet
Intranets are private networks implemented and maintained by an individual company or
organization. You can think of an intranet as an Internet that doesn't leave your company;
it's internal to the company, and access is limited to systems within the intranet. Intranets use the same technologies as the Internet. They can be connected to the Internet but can't be accessed by users who aren't authorized to be part of them; unlike the Internet's anonymous users, every intranet user is an authorized one. Access to the intranet is granted to trusted users inside the corporate network or to users in remote locations.
Security Zones: The Internet
Security Zones
Over time, networks can become complex beasts. What may have started as a handful of computers sharing resources can quickly grow to something resembling an electrician's nightmare. The networks may even appear to have lives of their own. It's common for a network to have connections among departments, companies, countries, and public access using private communication paths and through the Internet.
Not everyone in a network needs access to all the assets in the network. The term security zone describes design methods that isolate systems from other systems or networks.
You can isolate networks from each other using hardware and software. A router is a good example of a hardware solution: You can configure some machines on the network to be in a certain address range and others to be in a different address range. This separation makes the two networks invisible to each other unless a router connects them. Some of the newer data switches also allow you to partition networks into smaller networks or private zones.
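The address-range separation a router enforces can be checked with Python's standard `ipaddress` module. The two subnets and host addresses below are illustrative assumptions, not taken from any real network in this article.

```python
import ipaddress

# Two departments placed in different address ranges (illustrative).
net_a = ipaddress.ip_network("192.168.1.0/24")
net_b = ipaddress.ip_network("192.168.2.0/24")

host1 = ipaddress.ip_address("192.168.1.10")
host2 = ipaddress.ip_address("192.168.2.10")

# A host belongs to exactly one of the two ranges; without a router
# connecting them, traffic cannot cross from one range to the other.
print(host1 in net_a)  # True
print(host2 in net_a)  # False
print(host2 in net_b)  # True
```

This membership test is exactly the decision a router makes when it chooses whether a destination is local to an interface or must be forwarded.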
When discussing security zones in a network, it's helpful to think of them as rooms.
You may have some rooms in your house or office that anyone can enter. For other rooms, access is limited to specific individuals for specific purposes. Establishing security zones is a similar process in a network: Security zones allow you to isolate systems from unauthorized users. Here are the four most common security zones you'll encounter:
- Internet
- Intranet
- Extranet
- Demilitarized zone (DMZ)
The next few posts identify the topologies used to create security zones to provide security. The Internet has become a boon to individuals and to businesses, but it creates a challenge for security. By implementing intranets, extranets, and DMZs, you can create a reasonably secure environment for your organization.
Accountability: A real story…
Accountability, like common sense, applies to every aspect of information technology.
Several years ago, a company that relied on data that could never be re-created wrote shell scripts to do backups early in the morning when the hosts were less busy. Operators at those machines were told to insert a tape in the drive around midnight and check back at 3:00 a.m. to make certain that a piece of paper had been printed on the printer, signaling the end of the job. If the paper was there, they were to remove the tapes and put them in storage; if the paper was not there, they were to call for support.
The inevitable hard drive crash occurred on one of the hosts one morning, and an IT
"specialist" was dispatched to swap it out. The technician changed the hard drive and
then asked for the most recent backup tape. To his dismay, the data on the tape was two years old. The machine must have crashed before that morning's backup ran, he reasoned, but it seemed incredible that two years' worth of rotated tapes could all be that stale. Undaunted, he asked for the tape from the day before and found that the data on it was also two years old.
Beginning to sweat, he found the late shift operator for that host and asked her if she was making backups. She assured him that she was and that she was rotating the tapes and putting them away as soon as the paper printed out. Questioning her further on how the data could be so old, she said she could verify her story because she also kept the pieces of paper that appeared on the printer each day. She brought out the stack and handed them to him. They all reported the same thing—tape in drive is write protected.
Where did the accountability lie in this true story? The operator was faithfully following
the procedures given to her. She thought the fact that the tape was protected represented a good thing. It turned out that all the hosts had been printing the same message, and none of them had been backed up for a long while.
The problem lay not with the operator, but with the training she was given. Had she been shown what correct and incorrect backup completion reports looked like, the data would never have been lost.
Saturday, May 1, 2010
More than 110 Run commands
Add Hardware Wizard==>hdwwiz.cpl
Add/Remove Programs==>appwiz.cpl
Administrative Tools==>control admintools
Automatic Updates==>wuaucpl.cpl
Bluetooth Transfer Wizard==>fsquirt
Calculator==>calc
Certificate Manager==>certmgr.msc
Character Map==>charmap
Check Disk Utility==>chkdsk
Clipboard Viewer==>clipbrd
Command Prompt==>cmd
Component Services==>dcomcnfg
Computer Management==>compmgmt.msc
Date and Time Properties==>timedate.cpl
DDE Shares==>ddeshare
Device Manager==>devmgmt.msc
DirectX Control Panel - If Installed==>directx.cpl
DirectX Troubleshooter==>dxdiag
Disk Cleanup Utility==>cleanmgr
Disk Defragment==>dfrg.msc
Disk Management==>diskmgmt.msc
Disk Partition Manager==>diskpart
Display Properties==>control desktop
Display Properties==>desk.cpl
Display Properties w/Appearance Tab Preselected==>control color
Dr. Watson System Troubleshooting Utility==>drwtsn32
Driver Verifier Utility==>verifier
Event Viewer==>eventvwr.msc
File Signature Verification Tool==>sigverif
Findfast==>findfast.cpl
Folders Properties==>control folders
Fonts==>control fonts
Fonts Folder==>fonts
Free Cell Card Game==>freecell
Game Controllers==>joy.cpl
Group Policy Editor - XP Pro==>gpedit.msc
Hearts Card Game==>mshearts
Iexpress Wizard==>iexpress
Indexing Service==>ciadv.msc
Internet Properties==>inetcpl.cpl
IP Configuration - Display Connection Configuration==>ipconfig /all
IP Configuration - Display DNS Cache Contents==>ipconfig /displaydns
IP Configuration - Delete DNS Cache Contents==>ipconfig /flushdns
IP Configuration - Release All Connections==>ipconfig /release
IP Configuration - Renew All Connections==>ipconfig /renew
IP Configuration - Refreshes DHCP & Re-Registers DNS==>ipconfig /registerdns
IP Configuration - Display DHCP Class ID==>ipconfig /showclassid
Java Control Panel - If Installed==>jpicpl32.cpl
Java Control Panel - If Installed==>javaws
Keyboard Properties==>control keyboard
Local Security Settings==>secpol.msc
Local Users and Groups==>lusrmgr.msc
Logs You Out Of Windows==>logoff
Microsoft Chat==>winchat
Minesweeper Game==>winmine
Mouse Properties==>control mouse
Mouse Properties==>main.cpl
Network Connections==>control netconnections
Network Connections==>ncpa.cpl
Network Setup Wizard==>netsetup.cpl
Notepad==>notepad
Nview Desktop Manager - If Installed==>nvtuicpl.cpl
Object Packager==>packager
ODBC Data Source Administrator==>odbccp32.cpl
On Screen Keyboard==>osk
Opens AC3 Filter - If Installed==>ac3filter.cpl
Password Properties==>password.cpl
Performance Monitor==>perfmon.msc
Performance Monitor==>perfmon
Phone and Modem Options==>telephon.cpl
Power Configuration==>powercfg.cpl
Printers and Faxes==>control printers
Printers Folder==>printers
Private Character Editor==>eudcedit
Quicktime - If Installed==>QuickTime.cpl
Regional Settings==>intl.cpl
Registry Editor==>regedit
Registry Editor==>regedt32
Remote Desktop==>mstsc
Removable Storage==>ntmsmgr.msc
Removable Storage Operator Requests==>ntmsoprq.msc
Resultant Set of Policy - XP Pro==>rsop.msc
Scanners and Cameras==>sticpl.cpl
Scheduled Tasks==>control schedtasks
Security Center==>wscui.cpl
Services==>services.msc
Shared Folders==>fsmgmt.msc
Shuts Down Windows==>shutdown
Sounds and Audio==>mmsys.cpl
Spider Solitaire Card Game==>spider
SQL Client Configuration==>cliconfg
System Configuration Editor==>sysedit
System Configuration Utility==>msconfig
System File Checker Utility - Scan Immediately==>sfc /scannow
System File Checker Utility - Scan Once At Next Boot==>sfc /scanonce
System File Checker Utility - Scan On Every Boot==>sfc /scanboot
System File Checker Utility - Return to Default Setting==>sfc /revert
System File Checker Utility - Purge File Cache==>sfc /purgecache
System File Checker Utility - Set Cache Size to size x==>sfc /cachesize=x
System Properties==>sysdm.cpl
Task Manager==>taskmgr
Telnet Client==>telnet
User Account Management==>nusrmgr.cpl
Utility Manager==>utilman
Windows Firewall==>firewall.cpl
Windows Magnifier==>magnify
Windows Management Infrastructure==>wmimgmt.msc
Windows System Security Tool==>syskey
Windows Update Launches==>wupdmgr
Windows XP Tour Wizard==>tourstart
Wordpad==>write
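These same commands can be driven from a script on Windows. The tiny lookup helper below is an illustrative sketch built from a few entries in the list above; the actual launch line is shown only as a comment so the sketch stays platform-neutral.

```python
# A few entries from the Run-command list above, as a name -> command lookup.
RUN_COMMANDS = {
    "Calculator": "calc",
    "Command Prompt": "cmd",
    "Device Manager": "devmgmt.msc",
    "Registry Editor": "regedit",
    "System Configuration Utility": "msconfig",
}

def run_command(name):
    """Look up the Run command for a friendly tool name."""
    cmd = RUN_COMMANDS[name]
    # On Windows you could actually launch it with, e.g.:
    # subprocess.Popen(["cmd", "/c", "start", "", cmd])
    return cmd

print(run_command("Device Manager"))  # devmgmt.msc
```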
Monday, April 19, 2010
FTP
What is FTP, and how do I use it to transfer files?
Overview
FTP is an acronym for File Transfer Protocol. As the name suggests, FTP is used to transfer files between computers on a network. You can use FTP to exchange files between computer accounts, transfer files between an account and a desktop computer, or access online software archives. Keep in mind, however, that many FTP sites are heavily used and require several attempts before connecting.
How to use FTP
Graphical FTP clients
Graphical FTP clients simplify file transfers by allowing you to drag and drop file icons between windows. When you open the program, enter the name of the FTP host (e.g., ftp.empire.gov) and your username and password. If you are logging into an anonymous FTP server, you may not have to enter anything. Two common FTP programs are Cyberduck (for Mac) and WinSCP (for Windows).
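The same anonymous login a graphical client performs can be done from Python's standard `ftplib`. This is a sketch: the host `ftp.empire.gov` comes from the example above, the e-mail address used as the anonymous password is an assumption, and the network call is commented out because it needs a reachable server.

```python
from ftplib import FTP

def anon_credentials(email="user@example.com"):
    # Anonymous FTP convention: log in as "anonymous" with an e-mail
    # address as the password (this address is a placeholder).
    return ("anonymous", email)

def list_files(host):
    """Log in anonymously and return the file names in the root directory."""
    user, password = anon_credentials()
    with FTP(host) as ftp:      # connects on port 21
        ftp.login(user, password)
        return ftp.nlst()

# list_files("ftp.empire.gov")  # requires a reachable FTP server
print(anon_credentials()[0])    # anonymous
```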
Configure The FTP Service
To configure the FTP Service to allow only anonymous connections, follow these steps:
1-Start Internet Information Services Manager or open the IIS snap-in.
2-Expand Server_name, where Server_name is the name of the server.
3-Expand FTP Sites
4-Right-click Default FTP Site, and then click Properties.
5-Click the Security Accounts tab.
6-Click to select the Allow Anonymous Connections check box (if it is not already selected), and then click to select the Allow only anonymous connections check box.
When you click to select the Allow only anonymous connections check box, you configure the FTP Service to allow only anonymous connections. Users cannot log on by using user names and passwords.
7-Click the Home Directory tab.
8-Click to select the Read and Log visits check boxes (if they are not already selected), and then click to clear the Write check box (if it is not already cleared).
9-Click OK.
10-Quit Internet Information Services Manager or close the IIS snap-in.
Creating and Configuring FTP Sites in Windows Server 2003
Internet Information Services 6 (IIS 6) is a powerful platform for building and hosting web sites for both the Internet and corporate intranets. IIS 6 is also equally useful for setting up FTP sites for either public or corporate use, and in this article we'll walk through the process of creating and configuring FTP sites using both the GUI (IIS Manager) and scripts included in Windows Server 2003. The specific tasks we'll walk through in this article are:
1-Creating an FTP Site
2-Controlling Access to an FTP Site
3-Configuring FTP Site Logging
4-Stopping and Starting FTP Sites
5-Implementing FTP User Isolation
For the sake of interest, we'll again explain these tasks in the context of a fictitious company called TestCorp as it deploys FTP sites for both its corporate intranet and for anonymous users on the Internet.
Preliminary Steps
IIS is not installed by default during a standard installation of Windows Server 2003, and if you installed IIS using Manage Your Server as described in the previous article this installs the WWW service but not the FTP service. So before we can create FTP sites we first have to install the FTP service on our IIS machine. To do this, we need to add an additional component to the Application Server role we assigned our machine when we used Manage Your Server to install IIS.
Begin by opening Add or Remove Programs in Control Panel and selecting Add/Remove Windows Components. Then select the checkbox for Application Server:
Click Details and select the checkbox for Internet Information Services (IIS):
Click Details and select the checkbox for File Transfer Protocol (FTP) Services.
Click OK twice and then Next to install the FTP service. During installation you'll need to insert your Windows Server 2003 product CD or browse to a network distribution point where the Windows Server 2003 setup files are located. Click Finish when the wizard is done.
Creating an FTP Site
As with web sites, the simplest approach to identifying each FTP site on your machine is to assign each of them a separate IP address, so let's say that our server has three IP addresses (172.16.11.210, 172.16.11.211 and 172.16.11.212) assigned to it. Our first task will be to create a new FTP site for the Human Resources department, but before we do that let's first examine the Default FTP Site that was created when we installed the FTP service on our machine. Open IIS Manager in Administrative Tools, select FTP Sites in the console tree, and right-click on Default FTP Site and select Properties:
Just like the Default Web Site, the IP address for the Default FTP Site is set to All Unassigned. This means any IP address not specifically assigned to another FTP site on the machine opens the Default FTP Site instead, so right now opening either ftp://172.16.11.210, ftp://172.16.11.211 or ftp://172.16.11.212 in Internet Explorer will display the contents of the Default FTP Site.
Let's assign the IP address 172.16.11.210 for the Human Resources FTP site and make C:\HR the folder where its content is located. To create the new FTP site, right-click on the FTP Sites node and select New --> FTP Site. This starts the FTP Site Creation Wizard. Click Next and type a description for the site:
Click Next and specify 172.16.11.210 as the IP address for the new site:
Click Next and select Do not isolate users, since this will be a site that anyone (including guest users) will be free to access:
Click Next and specify C:\HR as the location of the root directory for the site:
Click Next and leave the access permissions set at Read only as this site will only be used for downloading forms for present and prospective employees:
Click Next and then Finish to complete the wizard. The new Human Resources FTP site can now be seen in IIS Manager under the FTP Sites node:
To view the contents of this site, go to a Windows XP desktop on the same network and open the URL ftp://172.16.11.210 using Internet Explorer:
Note in the status bar at the bottom of the IE window that you are connected as an anonymous user. To view all users currently connected to the Human Resources FTP site, right-click on the site in Internet Service Manager and select Properties, then on the FTP Site tab click the Current Sessions button to open the FTP User Sessions dialog:
Note that anonymous users using IE are displayed as IEUser@ under Connected Users.
Now let's create another FTP site using a script instead of the GUI. We'll create a site called Help and Support with root directory C:\Support and IP address 172.16.11.211:
Here's the result of running the script:
Controlling Access to an FTP Site
Just like for web sites, there are four ways you can control access to FTP sites on IIS: NTFS Permissions, IIS permissions, IP address restrictions, and authentication method. NTFS permissions are always your first line of defense but we can't cover them in detail here. IIS permissions are specified on the Home Directory tab of your FTP site's properties sheet:
Note that access permissions for FTP sites are much simpler (Read and Write only) than they are for web sites, and by default only Read permission is enabled, which allows users to download files from your FTP site. If you allow Write access, users will be able to upload files to the site as well. And of course access permissions and NTFS permissions combine the same way they do for web sites.
Like web sites, IP address restrictions can be used to allow or deny access to your site by clients that have a specific IP address, an IP address in a range of addresses, or a specific DNS name. These restrictions are configured on the Directory Security tab just as they are for web sites.
FTP sites also have fewer authentication options than web sites, as can be seen by selecting the Security Accounts tab:
By default Allow anonymous connections is selected, and this is fine for public FTP sites on the Internet but for private FTP sites on a corporate intranet you may want to clear this checkbox to prevent anonymous access to your site. Clearing this box has the result that your FTP site uses Basic Authentication instead, and users who try to access the site are presented with an authentication dialog box:
Note that Basic Authentication passes user credentials over the network in clear text so this means FTP sites are inherently insecure (they don't support Windows integrated authentication). So if you're going to deploy a private FTP site on your internal network make sure you close ports 20 and 21 on your firewall to block incoming FTP traffic from external users on the Internet.
Configuring FTP Site Logging
As with web sites, the default logging format for FTP sites is the W3C Extended Log File Format, and FTP site logs are stored in folders named
%SystemRoot%\system32\LogFiles\MSFTPSVCnnnnnnnnnn
where nnnnnnnnnn is the ID number of the FTP site. And just as with web sites, you can use the Microsoft Log Parser, part of the IIS 6.0 Resource Kit Tools, to analyze these FTP site logs.
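The log folder path can be composed from the site ID with a one-liner. This sketch assumes the `%SystemRoot%` value and uses `ntpath` so the Windows-style path builds correctly on any platform; the `MSFTPSVC<id>` naming mirrors the `W3SVC<id>` convention used for web site logs.

```python
import ntpath

def ftp_log_folder(site_id, system_root=r"C:\Windows"):
    """Build the IIS 6 FTP log folder path for a given site ID.

    system_root is an assumption here; on a real server read it
    from the SystemRoot environment variable instead.
    """
    return ntpath.join(system_root, "system32", "LogFiles",
                       f"MSFTPSVC{site_id}")

print(ftp_log_folder(1))  # C:\Windows\system32\LogFiles\MSFTPSVC1
```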
Stopping and Starting FTP Sites
If an FTP site becomes unavailable you may need to restart it to get it working again, which you can do using IIS Manager by right-clicking on the FTP site and selecting Stop and then Start. From the command-line you can type net stop msftpsvc followed by net start msftpsvc or use iisreset to restart all IIS services. Remember that restarting an FTP site is a last resort as any users currently connected to the site will be disconnected.
Implementing FTP User Isolation
Finally, let's conclude by looking at how to implement the new FTP User Isolation feature of IIS in Windows Server 2003. When an FTP site uses this feature, each user accessing the site has an FTP home directory that is a subdirectory under the root directory for the FTP site, and from the perspective of the user their FTP home directory appears to be the top-level folder of the site. This means users are prevented from viewing the files in other users' FTP home directories, which has the advantage of providing security for each user's files.
Let's create a new FTP site called Staff that makes use of this new feature, using C:\Staff Folders as the root directory for the site and 172.16.11.212 for the site's IP address. Start the FTP Site Creation Wizard as we did previously and step through it until you reach the FTP User Isolation page and select the Isolate users option on this page:
Continue with the wizard and be sure to give users both Read and Write permission so they can upload and download files.
Now let's say you have two users, Bob Smith (bsmith) and Mary Jones (mjones) who have accounts in a domain whose pre-Windows 2000 name is TESTTWO. To give these users FTP home directories on your server, first create a subfolder named \TESTTWO beneath \Staff Folders (your FTP root directory). Then create subfolders \bsmith and \mjones beneath the \TESTTWO folder. Your folder structure should now look like this:
C:\Staff Folders
\TESTTWO
\bsmith
\mjones
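The folder layout above can be scripted. The sketch below builds the same `<ftp root>\<domain>\<username>` structure in a temporary directory so it runs anywhere; on the real server you would point `root` at C:\Staff Folders instead.

```python
import os
import tempfile

# FTP User Isolation layout: <ftp root>\<domain>\<username>
# (built under a temp directory here for demonstration purposes).
root = os.path.join(tempfile.mkdtemp(), "Staff Folders")
domain = "TESTTWO"

for user in ("bsmith", "mjones"):
    os.makedirs(os.path.join(root, domain, user))

print(sorted(os.listdir(os.path.join(root, domain))))  # ['bsmith', 'mjones']
```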
To test FTP User Isolation let's put a file named Bob's Document.doc in the \bsmith subfolder and Mary's Document.doc in the \mjones subfolder. Now go to a Windows XP desktop and open Internet Explorer and try to open ftp://172.16.11.212, which is the URL for the Staff FTP site we just created. When you do this an authentication dialog box appears, and if you're Bob then you can enter your username (using the DOMAIN\username form) and password like this:
When Bob clicks the Log On button the contents of his FTP home directory are displayed:
Note that when you create a new FTP site using FTP User Isolation, you can't convert it to an ordinary FTP site (one that doesn't have FTP User Isolation enabled). Similarly, an ordinary FTP site can't be converted to one using FTP User Isolation.
We still need to explore one more option and that's the third option on the FTP User Isolation page of the FTP Site Creation Wizard, namely Isolate users using Active Directory. Since we've run out of IP addresses let's first delete the Help and Support FTP site to free up 172.16.11.211. One way we can do this is by opening a command prompt and typing iisftp /delete "Help and Support" using the iisftp.vbs command script. Then start the FTP Site Creation Wizard again and select the third option mentioned above (we'll name this new site Management):
Click Next and enter an administrator account in the domain, the password for this account, and the full name of the domain:
Click Next and confirm the password and complete the wizard in the usual way. You'll notice that you weren't prompted to specify a root directory for the new FTP site. This is because when you use this approach each user's FTP home directory is defined by two environment variables: %ftproot% which defines the root directory and can be anywhere including a UNC path to a network share on another machine such as \\test220\docs, and %ftpdir% which can be set to %username% so that for example Bob Smith's FTP home directory would be \\test220\docs\bsmith and this folder would have to be created beforehand for him. You could set these environment variables using a logon script and assign the script using Group Policy, but that's beyond the scope of this present article.
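The way `%ftproot%` and `%ftpdir%` compose each user's home directory can be sketched in one function. The UNC path and username below are the article's own examples; the function itself is an illustration of the composition, not a real IIS API.

```python
import ntpath

def ad_home_directory(ftproot, username):
    """Compose an Active Directory-isolated FTP home directory.

    ftproot plays the role of %ftproot%; with %ftpdir% set to
    %username%, the home directory is simply their join.
    """
    return ntpath.join(ftproot, username)

print(ad_home_directory(r"\\test220\docs", "bsmith"))  # \\test220\docs\bsmith
```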
Saturday, April 17, 2010
Three Areas of Information Security Part:3
Understanding Information Security Part:2
Information Security Part:1
Wednesday, April 14, 2010
Tracert
The traceroute utility checks how many "hops" (transfers through other computers on a network) it takes for your computer to contact another computer. You can use traceroute if you know the other computer's IP address, web site address, or name (e.g., 129.79.1.1, www.indiana.edu, or ns.indiana.edu).
To access the utility:
1.Open the command prompt:
◦Windows 7 or Vista: From the Start menu, in the search field, type cmd, and then press Enter.
◦Previous versions: From the Start menu, select Run.... In the "Open:" box, type cmd, and then press Enter.
2.At the command prompt, enter tracert example, where example is the IP address, name, or web site of the computer you are trying to access. For example, if you enter tracert www.indiana.edu, you should see something similar to the following:
Tracing route to www.indiana.edu [129.79.78.8]
over a maximum of 30 hops:
1 <10 ms <10 ms <10 ms 168.91.41.1
2 10 ms 20 ms 20 ms indy-bloomington-s4-6.ivy.tec.in.us [168.91.9.129]
3 10 ms 10 ms 20 ms akicita-lena.ivy.tec.in.us [168.91.1.4]
4 20 ms 30 ms 30 ms indnet.ivy.tec.in.us [168.91.1.130]
5 71 ms 40 ms 50 ms ind-s1-0-7-T1.ind.net [157.91.8.62]
6 80 ms 40 ms 40 ms serverfarm-atm0.ind.net [199.8.76.231]
7 60 ms 90 ms 80 ms iupui-atm6-0-100.ind.net [157.91.9.78]
8 50 ms 40 ms 90 ms indy-dmz.atm.iupui.edu [134.68.15.103]
9 * * * Request timed out.
10 40 ms 70 ms 90 ms wcc6-gw.ucs.indiana.edu [129.79.8.6]
11 * 40 ms 50 ms viator.ucs.indiana.edu [129.79.78.8]
Trace complete.
The first column, the hop count, represents the number of stops your information has made along the route to attempt to contact the other computer. The next three columns are the round-trip times in milliseconds for three different attempts to reach the destination. The last column is the name of the host that responded to the request.
The above example shows that a computer user on ivy.tec.in.us ran a traceroute to www.indiana.edu. On the fifth hop, the request left the Ivy Tech network and went to the ind.net network. On the eighth hop, the request went to the iupui.edu network. Finally, on the tenth hop, the request found its way to the indiana.edu network. Since there is a "Request timed out" message on the ninth hop, you might guess that there could be some problem between the iupui.edu network and the indiana.edu network. If you are seeing other problems, such as the web page at http://www.indiana.edu/ loading slowly, this could indicate the location of the problem.
In many cases, a network technician will need to analyze the problem further. To aid in this effort, you can save the output of the traceroute program as a text file by entering the following command, where example is the IP address, name, or web site you are trying to access:
tracert example > test.txt
You can then send the test.txt file to your computer support provider for further diagnosis.
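Before sending the file off, you can also pull the hop numbers, host names, and IP addresses out of the saved output yourself. The sketch below parses one line of tracert output; it is an illustrative helper (only lines with a resolved name in [brackets] are matched, so timed-out hops and summary lines return None).

```python
import re

# Matches a tracert line with a resolved host name, e.g.:
#   "  5    71 ms    40 ms    50 ms  ind-s1-0-7-T1.ind.net [157.91.8.62]"
HOP_RE = re.compile(r"^\s*(\d+)\s+(.*?)\s+(\S+)\s+\[([\d.]+)\]\s*$")

def parse_hop(line):
    """Return (hop_number, host, ip) for a resolved hop line, else None."""
    m = HOP_RE.match(line)
    if not m:
        return None
    return int(m.group(1)), m.group(3), m.group(4)

print(parse_hop("  5    71 ms    40 ms    50 ms  ind-s1-0-7-T1.ind.net [157.91.8.62]"))
```

Running this over every line of test.txt gives you a quick table of where each hop landed.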
What is a MOO?
The earliest MOO programming was developed by Stephen White, but the first huge step was the text-based “world” called LambdaMOO, created by Pavel Curtis, who corrected earlier bugs in White’s programs. It first went up in 1990, when most people had only dial up connections to the Internet, and was often accessed through UNIX based servers, through telnet connections. Users could not only talk and chat in various “rooms” together, but could also create their own objects, rooms, characters, and commands using fairly simple programming, called MOO programming language, which then would be added to the total MOO.
At the height of its popularity, Lambda had over 10,000 members, but that number has since dwindled with the rise of more user-friendly text-based Internet virtual worlds. Unfortunately, Lambda also became primarily associated with Internet flirtations and graphic sexual liaisons. Early servers and too much traffic on MOOs could also create significant "lag," which bred impatience and annoyance among users.
A more “friendly” application of MOOs was applied to teach distance learners, or to conduct online forums and classes, since these domains allowed for multiple users to communicate. Other MOOs allowed people of like minds to play scrabble together, or perhaps convene on issues in their profession. Yet others became the new forum for adventure games or to create fantasy worlds like Rupert, which is based on the Douglas Adams book, The Hitchhiker’s Guide to the Galaxy.
MOOs tend to have administrators called wizards, who can expel people from the MOO and might occasionally offer technical assistance. However, newbies were warned to read all help and frequently asked questions (FAQs) before approaching a wizard for help. Some wizards resented intrusions when information to a question could be found elsewhere. Some MOOs also had built in registration limits, but many MOOs like Lambda, allowed people to register as guests. Even if their characters had been expelled, they could come back.
The MOO heyday is primarily over. There are now multiple user online forums that allow for quicker communications, chats, and the like, and even allow for graphic based fantasy worlds instead of those based on text. The charm of MOOs, however, was the individual’s participation in the design. People came together to build “new worlds” of text.
Telnet
Telnet clients are available for all major operating systems.
Command-line telnet clients are built into most versions of Mac OS X, Windows (95 and later), Unix, and Linux. To use them, go to their respective command lines (i.e., the Terminal application in Mac OS X, the shell in Unix or Linux, or the DOS prompt in Windows), and then enter:
telnet host
Replace host with the name of the remote computer to which you wish to connect.
Telnet is a contraction of the words Telecommunications Network, and is one of the major network protocols used on the Internet. It is one of the earliest network protocols, and one of the only original protocols still in common use on the Internet. It was developed in 1969 with RFC 15, and has evolved over the years into a robust protocol, although with mounting security concerns it is often forgone in favor of the secure SSH protocol.
Unlike the graphical interfaces of the HTTP protocol, which have given us the World Wide Web, telnet is a text-based protocol. The original purpose of telnet was to have an easy interface for terminals to interact with one another, using relatively simple command structures and accessible interfaces. Although still in use, telnet is rarely used by the majority of the internet-browsing public, who instead use HTTP browsers and email clients for the majority of their connections.
In the age before personal computers, anyone who wanted to use a computer generally had to access a terminal that was hooked up to a massive mainframe. Originally, each terminal was hooked up to only one machine, which led to a number of problems. For example, if one person needed to use a number of different machines, each of which specialized in a different task, they would need to physically go to each different terminal to do one job. This could be frustrating if the terminals were located throughout a large building, but it was particularly maddening if the mainframe you needed was located at an institution in a different city or country.
The telnet protocol helped overcome this difficulty. By using a simple suite of commands, users could log in to a distant terminal and ask the mainframe there to undertake whatever processes they needed accomplished. The results would come back to them through telnet, and it was as though they were sitting in front of the terminal itself. In many ways, telnet helped revolutionize the way research was done, and helped build what would eventually become the internet we know today.
Of course, not all of the early uses for telnet were so practical. In fact, one of the ways in which telnet is still used to this day has its roots back in 1978, when a student at Essex University built on the earlier success of terminal games like Adventure and Zork to create a Multi-User Dungeon game, or MUD. These virtual environments, which include other varieties like MUSHes and MOOs, allow multiple people to connect to a terminal via the telnet protocol. Once there, they can play a collective game, often fantasy themed, by inputting text commands and reading the responses and inputs from other players. Although the use of MUDs has diminished with the advent of graphical Massively Multiplayer Online Role-Playing Games (MMORPGs), they still remain a major use of the telnet protocol, with hundreds of thousands of players worldwide.
Although at one point telnet was widely used by network administrators and those who needed to manage their servers, it is rarely used for this purpose anymore. In 1995, a researcher at the Helsinki University of Technology in Finland, fed up with the security holes in telnet which allowed for malicious password sniffing and attacks, built a new protocol to replace it. This protocol, the Secure Shell, or SSH, has most of the same features as telnet, but much more robust security.
In the early days of the Internet, Telnet was also used to connect with something called a free-net, which is just what it sounds like: an open-access computer system. This was in part because dial-up modems were so slow, whereas Telnet worked a lot faster. With the advent of high-speed internet providers, however, most free-nets have shut down.
How Telnet Works
Telnet uses software, installed on your computer, to create a connection with the remote host. The Telnet client (software), at your command, will send a request to the Telnet server (remote host). The server will reply asking for a user name and password. If accepted, the Telnet client will establish a connection to the host, thus making your computer a virtual terminal and allowing you complete access to the host's computer.
Telnet requires the use of a user name and password, which means you need to have previously set up an account on the remote computer. In some cases, however, computers with Telnet will allow guests to log on with restricted access.
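The prompt-and-reply login exchange described above is easy to sketch in code. The snippet below is an illustrative sketch only, not the real telnet protocol (it skips telnet's option negotiation entirely): a throwaway "server" thread on the loopback interface plays the role of the remote host, prompting for a user name and password, while the client answers the prompts. The names and the guest/guest credentials are hypothetical.

```python
import socket
import threading

def fake_telnet_server(listener):
    """Stand-in for a remote host: prompt for credentials, reply, hang up."""
    conn, _ = listener.accept()
    conn.sendall(b"login: ")
    user = conn.recv(1024).strip()
    conn.sendall(b"Password: ")
    password = conn.recv(1024).strip()
    if (user, password) == (b"guest", b"guest"):
        conn.sendall(b"Welcome, guest\r\n")
    else:
        conn.sendall(b"Login incorrect\r\n")
    conn.close()

def telnet_login(host, port, user, password):
    """Client side: answer the two prompts, return the server's verdict."""
    with socket.create_connection((host, port)) as s:
        s.recv(1024)                     # "login: "
        s.sendall(user + b"\r\n")
        s.recv(1024)                     # "Password: "
        s.sendall(password + b"\r\n")    # sent in the clear, like real telnet
        return s.recv(1024).decode().strip()

# Wire the two halves together over the loopback interface.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=fake_telnet_server, args=(listener,))
t.start()
banner = telnet_login("127.0.0.1", port, b"guest", b"guest")
t.join()
listener.close()
print(banner)                            # Welcome, guest
```

Note that, just as with real telnet, the credentials cross the connection as plain text — exactly the weakness SSH was later designed to fix.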
Does my Computer Have Telnet?
Every major computer operating system, including Unix, Linux, Mac OS and Windows, has Telnet capabilities and may even have Telnet built in. To find out, open a command prompt on your system (in Windows, use the "Run" function in the Start menu to open the DOS prompt) and enter the command TELNET HOST, where HOST is the name of the remote host computer to which you would like to connect.
Interestingly, Windows Vista does not enable Telnet by default. To run Telnet on Vista, you must activate the application: open the Start menu, click "Control Panel," click "Programs," and choose "Turn Windows features on or off." A dialog box will appear, and you should see Telnet Client listed, with a box next to it. Check the box to select Telnet, then click "OK" and wait until installation is complete.
When you want to exit the Telnet application, you need to run the command prompt again on your own computer. Different operating systems use different commands to exit, such as QUIT, CLOSE and LOGOFF. Windows uses LOGOFF. If none of the commands work, you can try ABORT; however, this command serves only to end Telnet on your end, sometimes leaving it running on the remote host computer, so use ABORT only as your last option.
How to Connect to a Telnet Server
Telnet is a program that allows you to connect to and communicate with a remote server, sometimes referred to as a telnet server. You can execute the full range of commands on the remote server using a telnet connection from your local computer. Telnet does not encrypt the information it sends (for example, passwords), and hence such a connection is not secure.
How to Set Up a Telnet Server
Telnet is a text-based program that allows you to connect to other computers remotely. In Windows Vista and 7, this program is available as part of the operating system but is not installed by default. To set up the Telnet server you must install the program and establish the group of users. You can then open the Telnet window and manually enter commands. The process is quite complex, but Telnet allows you to work on another computer remotely, just as if you were sitting right in front of it.
Monday, April 12, 2010
Wireshark
Wireshark is cross-platform, using the GTK+ widget toolkit to implement its user interface, and using pcap to capture packets; it runs on various Unix-like operating systems including Linux, Mac OS X, BSD, and Solaris, and on Microsoft Windows. Released under the terms of the GNU General Public License, Wireshark is free software.
Functionality
Wireshark is very similar to tcpdump, but it has a graphical front-end, and many more information sorting and filtering options. It allows the user to see all traffic being passed over the network (usually an Ethernet network but support is being added for others) by putting the network interface into promiscuous mode.
What is Wireshark?
Wireshark is a network packet analyzer. A network packet analyzer captures network packets and displays the packet data in as much detail as possible.
You could think of a network packet analyzer as a measuring device used to examine what's going on inside a network cable, just like a voltmeter is used by an electrician to examine what's going on inside an electric cable (but at a higher level, of course).
In the past, such tools were either very expensive, proprietary, or both. However, with the advent of Wireshark, all that has changed.
Wireshark is perhaps one of the best open source packet analyzers available today.
Some intended purposes
Here are some examples people use Wireshark for:
network administrators use it to troubleshoot network problems
network security engineers use it to examine security problems
developers use it to debug protocol implementations
people use it to learn network protocol internals
Besides these examples, Wireshark can be helpful in many other situations too.
Features
The following are some of the many features Wireshark provides:
Available for UNIX and Windows.
Capture live packet data from a network interface.
Display packets with very detailed protocol information.
Open and save captured packet data.
Import and export packet data to and from many other capture programs.
Filter packets on many criteria.
Search for packets on many criteria.
Colorize packet display based on filters.
Create various statistics.
... and a lot more!
However, to really appreciate its power, you have to start using it.
Wireshark captures packets and allows you to examine their content.
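To make "examine their content" concrete: a packet analyzer's first job is to split a captured frame into its header fields. The sketch below is a toy, not Wireshark code; it decodes just the 14-byte Ethernet header, the outermost layer Wireshark shows for a typical capture, and the sample frame bytes are made up.

```python
import struct

def decode_ethernet(frame):
    """Split out the 14-byte Ethernet header: dst MAC, src MAC, EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return as_mac(dst), as_mac(src), hex(ethertype)

# A broadcast frame carrying an IPv4 payload (EtherType 0x0800).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"...payload..."
print(decode_ethernet(frame))
# ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800')
```

A real analyzer keeps going from here: it hands the payload to an IPv4 decoder, then to TCP or UDP, and so on up the protocol stack.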
What Wireshark is not
Here are some things Wireshark does not provide:
Wireshark isn't an intrusion detection system. It will not warn you when someone does strange things on your network that he/she isn't allowed to do. However, if strange things happen, Wireshark might help you figure out what is really going on.
Wireshark will not manipulate things on the network; it will only "measure" things from it. Wireshark doesn't send packets on the network or do other active things (except for name resolution, but even that can be disabled).
TCPdump
tcpdump is free software.
Tcpdump works on most Unix-like operating systems: Linux, Solaris, BSD, Mac OS X, HP-UX and AIX among others. In those systems, tcpdump uses the libpcap library to capture packets.
There is also a port of tcpdump for Windows called WinDump; this uses WinPcap, which is a port of libpcap (the packet capture library) to Windows.
In some Unix-like operating systems, a user must have superuser privileges (the superuser is a special user account used for system administration) to use tcpdump, because the packet capturing mechanisms on those systems require elevated privileges. However, the -Z option may be used to drop privileges to a specific unprivileged user after capturing has been set up. In other Unix-like operating systems, the packet capturing mechanism can be configured to allow non-privileged users to use it; if that is done, superuser privileges are not required.
The user may optionally apply a BPF-based filter (BPF, the Berkeley Packet Filter, operates on raw link-layer packets) to limit the number of packets seen by tcpdump; this renders the output more usable on networks with a high volume of traffic.
Common uses of tcpdump
Tcpdump is frequently used to debug applications that generate or receive network traffic. It can also be used for debugging the network setup itself, by determining whether all necessary routing is occurring properly, allowing the user to further isolate the source of a problem.
It is also possible to use tcpdump for the specific purpose of intercepting and displaying the communications of another user or computer. A user with the necessary privileges on a system acting as a router or gateway through which unencrypted traffic such as TELNET or HTTP passes can use tcpdump to view login IDs, passwords, the URLs and content of websites being viewed, or any other unencrypted information.
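The BPF filter expressions mentioned above work by testing each captured packet against a predicate and discarding non-matches before they ever reach the output. As a rough illustration — pure Python over made-up packet dictionaries, not real BPF bytecode — here is what an expression like `tcp port 80` amounts to:

```python
def matches_tcp_port_80(pkt):
    """Toy equivalent of the BPF expression 'tcp port 80':
    keep the packet if it is TCP and either endpoint uses port 80."""
    return pkt["proto"] == "tcp" and 80 in (pkt["sport"], pkt["dport"])

packets = [
    {"proto": "tcp", "sport": 51234, "dport": 80},     # HTTP request
    {"proto": "udp", "sport": 53,    "dport": 51235},  # DNS reply
    {"proto": "tcp", "sport": 80,    "dport": 51234},  # HTTP response
]
kept = [p for p in packets if matches_tcp_port_80(p)]
print(len(kept))  # 2 -- the UDP packet is filtered out
```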
When tcpdump finishes capturing packets, it will report counts of:
packets ``captured'' (this is the number of packets that tcpdump has received and processed);
packets ``received by filter'' (the meaning of this depends on the OS on which you're running tcpdump, and possibly on how the OS was configured. If a filter was specified on the command line, on some OSes it counts packets regardless of whether they matched the filter expression and regardless of whether tcpdump has read and processed them yet; on other OSes it counts only packets that matched the filter expression, regardless of whether tcpdump has read and processed them yet; and on still other OSes it counts only packets that both matched the filter expression and were processed by tcpdump);
packets ``dropped by kernel'' (this is the number of packets that were dropped, due to a lack of buffer space, by the packet capture mechanism in the OS on which tcpdump is running, if the OS reports that information to applications; if not, it will be reported as 0).
OPTIONS
-A
Print each packet (minus its link level header) in ASCII. Handy for capturing web pages.
-B
Set the operating system capture buffer size to buffer_size.
-c
Exit after receiving count packets.
-C
Before writing a raw packet to a savefile, check whether the file is currently larger than file_size and, if so, close the current savefile and open a new one. Savefiles after the first savefile will have the name specified with the -w flag, with a number after it, starting at 1 and continuing upward. The units of file_size are millions of bytes (1,000,000 bytes, not 1,048,576 bytes).
-d
Dump the compiled packet-matching code in a human readable form to standard output and stop.
-dd
Dump packet-matching code as a C program fragment.
-ddd
Dump packet-matching code as decimal numbers (preceded with a count).
-D
Print the list of the network interfaces available on the system and on which tcpdump can capture packets. For each network interface, a number and an interface name, possibly followed by a text description of the interface, is printed. The interface name or the number can be supplied to the -i flag to specify an interface on which to capture.
This can be useful on systems that don't have a command to list them (e.g., Windows systems, or UNIX systems lacking ifconfig -a); the number can be useful on Windows 2000 and later systems, where the interface name is a somewhat complex string.
The -D flag will not be supported if tcpdump was built with an older version of libpcap that lacks the pcap_findalldevs() function.
-e
Print the link-level header on each dump line.
-E
Use spi@ipaddr algo:secret for decrypting IPsec ESP packets that are addressed to addr and contain Security Parameter Index value spi. This combination may be repeated with comma or newline separation.
Note that setting the secret for IPv4 ESP packets is supported at this time.
Algorithms may be des-cbc, 3des-cbc, blowfish-cbc, rc3-cbc, cast128-cbc, or none. The default is des-cbc. The ability to decrypt packets is only present if tcpdump was compiled with cryptography enabled.
secret is the ASCII text for the ESP secret key. If preceded by 0x, then a hex value will be read.
The option assumes RFC2406 ESP, not RFC1827 ESP. The option is only for debugging purposes, and the use of this option with a true `secret' key is discouraged. By presenting the IPsec secret key on the command line you make it visible to others, via ps(1) and similar means.
In addition to the above syntax, the syntax file name may be used to have tcpdump read the provided file in. The file is opened upon receiving the first ESP packet, so any special permissions that tcpdump may have been given should already have been given up.
-f
Print `foreign' IPv4 addresses numerically rather than symbolically (this option is intended to get around serious brain damage in Sun's NIS server --- usually it hangs forever translating non-local internet numbers).
The test for `foreign' IPv4 addresses is done using the IPv4 address and netmask of the interface on which capture is being done. If that address or netmask are not available, either because the interface on which capture is being done has no address or netmask or because the capture is being done on the Linux "any" interface, which can capture on more than one interface, this option will not work correctly.
-F
Use file as input for the filter expression. An additional expression given on the command line is ignored.
-G
If specified, rotates the dump file specified with the -w option every rotate_seconds seconds. Savefiles will have the name specified by -w which should include a time format as defined by strftime(3). If no time format is specified, each new file will overwrite the previous.
If used in conjunction with the -C option, filenames will take the form of `file<count>'.
-i
Listen on interface. If unspecified, tcpdump searches the system interface list for the lowest numbered, configured up interface (excluding loopback). Ties are broken by choosing the earliest match.
On Linux systems with 2.2 or later kernels, an interface argument of ``any'' can be used to capture packets from all interfaces. Note that captures on the ``any'' device will not be done in promiscuous mode.
If the -D flag is supported, an interface number as printed by that flag can be used as the interface argument.
-I
Put the interface in "monitor mode"; this is supported only on IEEE 802.11 Wi-Fi interfaces, and supported only on some operating systems.
Note that in monitor mode the adapter might disassociate from the network with which it's associated, so that you will not be able to use any wireless networks with that adapter. This could prevent accessing files on a network server, or resolving host names or network addresses, if you are capturing in monitor mode and are not connected to another network with another adapter.
This flag will affect the output of the -L flag. If -I isn't specified, only those link-layer types available when not in monitor mode will be shown; if -I is specified, only those link-layer types available when in monitor mode will be shown.
-K
Don't attempt to verify IP, TCP, or UDP checksums. This is useful for interfaces that perform some or all of those checksum calculation in hardware; otherwise, all outgoing TCP checksums will be flagged as bad.
-l
Make stdout line buffered. Useful if you want to see the data while capturing it. E.g.,
``tcpdump -l | tee dat'' or ``tcpdump -l > dat & tail -f dat''.
-L
List the known data link types for the interface, in the specified mode, and exit. The list of known data link types may be dependent on the specified mode; for example, on some platforms, a Wi-Fi interface might support one set of data link types when not in monitor mode (for example, it might support only fake Ethernet headers, or might support 802.11 headers but not support 802.11 headers with radio information) and another set of data link types when in monitor mode (for example, it might support 802.11 headers, or 802.11 headers with radio information, only in monitor mode).
-m
Load SMI MIB module definitions from file module. This option can be used several times to load several MIB modules into tcpdump.
-M
Use secret as a shared secret for validating the digests found in TCP segments with the TCP-MD5 option (RFC 2385), if present.
-n
Don't convert addresses (i.e., host addresses, port numbers, etc.) to names.
-N
Don't print domain name qualification of host names. E.g., if you give this flag then tcpdump will print ``nic'' instead of ``nic.ddn.mil''.
-O
Do not run the packet-matching code optimizer. This is useful only if you suspect a bug in the optimizer.
-p
Don't put the interface into promiscuous mode. Note that the interface might be in promiscuous mode for some other reason; hence, `-p' cannot be used as an abbreviation for `ether host {local-hw-addr} or ether broadcast'.
-q
Quick (quiet?) output. Print less protocol information so output lines are shorter.
-R
Assume ESP/AH packets to be based on old specification (RFC1825 to RFC1829). If specified, tcpdump will not print replay prevention field. Since there is no protocol version field in ESP/AH specification, tcpdump cannot deduce the version of ESP/AH protocol.
-r
Read packets from file (which was created with the -w option). Standard input is used if file is ``-''.
-S
Print absolute, rather than relative, TCP sequence numbers.
-s
Snarf snaplen bytes of data from each packet rather than the default of 65535 bytes. Packets truncated because of a limited snapshot are indicated in the output with ``[|proto]'', where proto is the name of the protocol level at which the truncation has occurred. Note that taking larger snapshots both increases the amount of time it takes to process packets and, effectively, decreases the amount of packet buffering. This may cause packets to be lost. You should limit snaplen to the smallest number that will capture the protocol information you're interested in. Setting snaplen to 0 sets it to the default of 65535, for backwards compatibility with recent older versions of tcpdump.
-T
Force packets selected by "expression" to be interpreted as the specified type. Currently known types are aodv (Ad-hoc On-demand Distance Vector protocol), cnfp (Cisco NetFlow protocol), rpc (Remote Procedure Call), rtp (Real-Time Applications protocol), rtcp (Real-Time Applications control protocol), snmp (Simple Network Management Protocol), tftp (Trivial File Transfer Protocol), vat (Visual Audio Tool), and wb (distributed White Board).
-t
Don't print a timestamp on each dump line.
-tt
Print an unformatted timestamp on each dump line.
-ttt
Print a delta (micro-second resolution) between current and previous line on each dump line.
-tttt
Print a timestamp in default format preceded by the date on each dump line.
-ttttt
Print a delta (micro-second resolution) between current and first line on each dump line.
-u
Print undecoded NFS handles.
-U
Make output saved via the -w option ``packet-buffered''; i.e., as each packet is saved, it will be written to the output file, rather than being written only when the output buffer fills.
The -U flag will not be supported if tcpdump was built with an older version of libpcap that lacks the pcap_dump_flush() function.
-v
When parsing and printing, produce (slightly more) verbose output. For example, the time to live, identification, total length and options in an IP packet are printed. Also enables additional packet integrity checks such as verifying the IP and ICMP header checksum.
When writing to a file with the -w option, report, every 10 seconds, the number of packets captured.
-vv
Even more verbose output. For example, additional fields are printed from NFS reply packets, and SMB packets are fully decoded.
-vvv
Even more verbose output. For example, telnet SB ... SE options are printed in full. With -X Telnet options are printed in hex as well.
-w
Write the raw packets to file rather than parsing and printing them out. They can later be printed with the -r option. Standard output is used if file is ``-''. See pcap-savefile(5) for a description of the file format.
-W
Used in conjunction with the -C option, this will limit the number of files created to the specified number, and begin overwriting files from the beginning, thus creating a 'rotating' buffer. In addition, it will name the files with enough leading 0s to support the maximum number of files, allowing them to sort correctly.
Used in conjunction with the -G option, this will limit the number of rotated dump files that get created, exiting with status 0 when reaching the limit. If used with -C as well, the behavior will result in cyclical files per timeslice.
-x
When parsing and printing, in addition to printing the headers of each packet, print the data of each packet (minus its link level header) in hex. The smaller of the entire packet or snaplen bytes will be printed. Note that this is the entire link-layer packet, so for link layers that pad (e.g. Ethernet), the padding bytes will also be printed when the higher layer packet is shorter than the required padding.
-xx
When parsing and printing, in addition to printing the headers of each packet, print the data of each packet, including its link level header, in hex.
-X
When parsing and printing, in addition to printing the headers of each packet, print the data of each packet (minus its link level header) in hex and ASCII. This is very handy for analysing new protocols.
-XX
When parsing and printing, in addition to printing the headers of each packet, print the data of each packet, including its link level header, in hex and ASCII.
-y
Set the data link type to use while capturing packets to datalinktype.
-z
Used in conjunction with the -C or -G options, this will make tcpdump run " command file " where file is the savefile being closed after each rotation. For example, specifying -z gzip or -z bzip2 will compress each savefile using gzip or bzip2.
Note that tcpdump will run the command in parallel to the capture, using the lowest priority so that this doesn't disturb the capture process.
And in case you would like to use a command that itself takes flags or different arguments, you can always write a shell script that will take the savefile name as the only argument, arrange the flags and arguments, and execute the command that you want.
-Z
Drops privileges (if root) and changes user ID to user and the group ID to the primary group of user.
This behavior can also be enabled by default at compile time.
expression
selects which packets will be dumped. If no expression is given, all packets on the net will be dumped. Otherwise, only packets for which expression is `true' will be dumped.
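Files written with -w and read back with -r use the pcap savefile format described in pcap-savefile(5): a 24-byte global header followed by per-packet records. As an illustrative sketch — the helper name is made up, and only the little-endian byte order is handled — here is how that global header can be decoded:

```python
import struct

# 24-byte pcap global header, per pcap-savefile(5): magic number,
# version major/minor, timezone offset, sigfigs, snaplen, link type.
PCAP_GLOBAL_HEADER = struct.Struct("<IHHiIII")

def parse_pcap_header(data):
    """Decode a little-endian pcap global header into a small dict."""
    (magic, major, minor, _thiszone, _sigfigs,
     snaplen, linktype) = PCAP_GLOBAL_HEADER.unpack(data[:24])
    if magic != 0xA1B2C3D4:
        raise ValueError("not a little-endian pcap file")
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}

# The kind of header tcpdump -w writes: pcap 2.4, snaplen 65535, Ethernet (1).
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(header))
# {'version': (2, 4), 'snaplen': 65535, 'linktype': 1}
```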
Sunday, April 11, 2010
IIS is Microsoft's web server and is used for creating, managing, and hosting web sites.
Some notes on the details of IIS:
FTP (File Transfer Protocol): adds download capability to your site.
SMTP Service: provides the ability to send and receive email.
Saturday, April 10, 2010
Internet Information Service (IIS)
IIS
provides two services: Web and FTP.
FTP
is used for sharing web content, and for uploading and downloading.
In the first case we must also know the name of the shared folder, but with FTP this is not necessary. Uploading in the first case is harder, for example, because we must know both the IP address and the name of the shared file; and if the side receiving the web service does not have the program needed to open the shared file, the file cannot be opened.
IIS versions
Win2k (Win2000): IIS 5.0
WinXP: IIS 5.1
Win2k3 (Win2003): IIS 6.0
Win2k8 (Win2008) & Windows Vista: IIS 7.0
Parameters within IIS
Application Pool
Web Sites
Web Service Extension
Web Site
A web site exists by default inside the Web Sites folder. To create a new web site, right-click on Web Sites:
New/Web Site/Next/Description/IP Address/TCP Port, Host Header/Path/web site access permissions (Read, ...)/Finish.
Where we supply the path, we specify where the web site's files are read from.
PHP, ASP, HTML
are languages used for designing web sites.
A single web server cannot host several web sites at once; if several exist, only one of the web sites is active and the rest are inactive.
Solutions for hosting several web sites
Give the web server several IP addresses and assign one IP address to each web site. Alternatively, use separate ports; if we want to use a port other than 80, we must enter the address like this:
IP address:port number
Host header
is another solution to this problem; it is one of the HTTP parameters.
Web server permissions
Read
Users can read, but they have no access to the resources and cannot make changes.
Write
exists when a site is first being set up; it is for troubleshooting and has access to the resources.
If we have written a script that should run when a user enters the site, we must grant the
Run Script (such as ASP)
permission.
Execute
is the permission for running executable files.
Web Binding
refers to assigning an IP address, host header, and port number to a web site.
Sunday, April 4, 2010
Internet Information Services (IIS)
Windows Server 2003 Service Pack 1 includes Internet Information Services (IIS), Version 6.0, which makes it possible for you to host your own Web site on the Internet or your intranet.
IIS is an optional component of Windows Server 2003, is not enabled by default, and must be installed separately.
Who does this feature apply to?
This feature applies to the following audiences:
• IT professionals who use IIS to host and administer a Web site.
• Web developers who use IIS to develop Web content.
Internet Information Services 6 (IIS 6) is a powerful platform for hosting web sites on both the public Internet and on private intranets. Creating and configuring web sites and virtual directories are bread-and-butter tasks for IIS administrators, and in this article we'll walk through the process of doing this using both the GUI (IIS Manager) and various scripts included with Windows Server 2003. The five specific tasks we'll walk through are:
Creating a Web Site
Controlling Access to a Web Site
Configuring Web Site Logging
Configuring Web Site Redirection
Stopping and Starting Web Sites
For the sake of interest, we'll explain these tasks in the context of a fictitious company called TestCorp as it deploys IIS for its corporate intranet.
Preliminary Steps
Unlike earlier versions of Microsoft Windows, IIS is not installed by default on Windows Server 2003. To install IIS, open Manage Your Server from the Start menu and add the Application Server role:
Note that for simple security reasons IIS should only be installed on member servers, not domain controllers. The reason is that if you install IIS on a domain controller and your web server becomes compromised, the attacker could gain access to your accounts database and wreak havoc with your network.
Creating a Web Site
The simplest approach is to use a separate IP address to identify each web site on your machine. Let's say our server has five IP addresses assigned to it from the range 172.16.11.220 through 172.16.11.224. Before we create a new Human Resources web site, let's first examine the identity of the Default Web Site. Open IIS Manager in Administrative Tools, select Web Sites in the console tree, right-click on Default Web Site, and open its properties:
The IP address for the Default Web Site is All Unassigned. This means any IP address not specifically assigned to another web site on the machine opens the Default Web Site instead. A typical use for the Default Web Site is to edit its default document to display general information like a company logo and how to contact the Support Desk.
Let's use IP address 172.16.11.221 for the Human Resources site and make D:\HR the folder where the home page for this site is stored. To create the HR site, right-click on the Web Sites node and select New --> Web Site. This starts the Web Site Creation Wizard. Click Next and type a description for the site:
Click Next again and specify 172.16.11.221 as the IP address for the site:
Click Next and specify D:\HR as the home folder for the site. We've cleared the checkbox to deny anonymous access to the site because this is an internal intranet so only authenticated users should be able to access it (public web sites generally allow anonymous access):
Click Next and leave only Read access enabled since the Human Resources site will initially only be used to inform employees of company policies:
Click Next and then Finish to create the new web site:
Now let's create another intranet site, this time for Help Desk, which will use IP address 172.16.11.222 and home folder D:\Help. We'll create this one using a script instead of the GUI:
And here's the result:
The script we used here is Iisweb.vbs, one of several IIS administration scripts available when you install IIS on Windows Server 2003. Note that, unlike the Web Site Creation Wizard used previously, you can't use this script to create a web site with anonymous access disabled. So if you want to disable anonymous access you should do it by opening the properties sheet for the Help Desk site, selecting the Directory Security tab, and clicking the Edit button under Authentication and Access Control. This opens the Authentication Methods box, where you can clear the checkbox to disable Anonymous Access and leave Windows Integrated Authentication as the only authentication method available for clients on your network:
Controlling Access to a Web Site
First let's look at how we can control access to our web sites. There are basically four ways you can do this: NTFS permissions, web permissions, IP address restrictions, and authentication methods. NTFS permissions are your front line of defense, but they're a general subject that we can't cover in detail here. Web permissions are specified on the Home Directory tab of your web site's properties:
By default only Read permission is enabled, but you can also allow Write access so users can upload or modify files on your site, Script source access so users can view the code in your scripts (generally not a good idea), or Directory browsing so users can view a list of files in your site (also not a good idea). Web permissions apply equally to all users trying to access your site, and they are applied before NTFS permissions are applied. So if Read web permission is denied but NTFS Read permission is allowed, users are denied access to the site.
IP address restrictions can be used to allow or deny access to your site by clients that have a specific IP address, have an IP address within a range of addresses, or have a specific DNS domain name. To configure this, select the Directory Security tab and click the Edit button under IP Address and Domain Name Restrictions. This opens the following dialog, which by default does not restrict access to your site:
The main thing to watch for here is that denying access based on domain name involves reverse DNS lookups each time clients try to connect to your web site, and this can significantly impact the performance of your site.
The final way of controlling access to your sites is to use the Authentication Methods dialog box we looked at previously:
In summary, the five authentication options displayed here are:
*Anonymous access. Used mainly for web sites on public (Internet) web servers.
*Integrated Windows authentication. Used mainly for web sites on a private intranet.
*Digest authentication. Challenge/response authentication scheme that only works with clients running Internet Explorer 5.0 or later.
*Basic authentication. Older authentication scheme that transmits passwords over the network in clear text, so use this only in conjunction with SSL.
*.NET Passport authentication. Allows users to use their .NET Passport for authentication.
Configuring Web Site Logging
Since web sites are prime targets for attackers, you probably want to log hits to your site to see who's visiting it. By default IIS 6 logs traffic to all content as can be seen on the bottom of the General tab of the properties for a web site or virtual directory:
The default logging format is the W3C Extended Log File Format, and clicking Properties indicates new log files are created daily in the indicated directory. It's a good idea to specify that local time be used for logging traffic as this makes it easier to interpret the logs:
Configuring Web Site Redirection
Sometimes you need to take your web site down for maintenance, and in such cases it's a good idea to redirect all client traffic directed at your site to an alternate site or page informing users what's going on. IIS lets you redirect a web site to a different file or folder on the same or another web site, or even to a URL on the Internet. To configure redirection you use the Home Directory tab and choose the redirection option you want to use:
Stopping and Starting Web Sites
Finally, if sites become unavailable you may need to restart IIS to get them working again. Restarting IIS is a last resort, as any users currently connected will be disconnected and any data stored in memory by IIS applications will be lost. You can restart IIS using IIS Manager by right-clicking on the server node:
You can also do the same from the command-line using the Iisreset command:
Type iisreset /? for the full syntax of this command. You can also start and stop individual web sites using IIS Manager or the Iisweb.vbs script. And you can stop or start individual IIS services using the net commands, for example net stop w3svc will stop the WWW services only.
Summary
In this article I've explained how to create and configure web sites on IIS 6. Most of what we've covered also applies to IIS 5 on Windows 2000 as well.