For example, by default most operating systems will install the UUCP utilities, which were necessary in the days of networking over phone lines. These utilities are rarely, if ever, used today, run with elevated privileges, and are non-trivial to configure correctly. Many systems have been compromised because of incorrect UUCP configurations, so I always recommend that unless those tools are genuinely required on a system, they not be installed.
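A quick way to check whether any privileged UUCP binaries are lying around is something like the following (the paths and name pattern are illustrative, and vary by system):

    # look for setuid binaries whose names start with "uu"
    find /usr /etc -type f -name 'uu*' -perm -4000 -ls 2>/dev/null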
Another example would be SNMP tools. Vendors love to show off "remote" reporting facilities which use SNMP, but if configured incorrectly, SNMP can be used by unauthorized parties to map out network relationships you may not want outside parties knowing about. SNMP is not trivial to configure, and hardly ever required. It's another package I recommend not installing at all, unless you're prepared to take the time to configure it carefully.
There are, of course, other similar types of software, but without knowing how the machine will be used, it's difficult to give an exhaustive list of what should be avoided. In general, avoid installing software that won't be required for the general operation of the system.
Also, if you have the option, it's preferable to mount file-systems that unprivileged users can write to (such as the /tmp file-system) in such a way that executables placed within those directories cannot run with setuid/setgid privileges. Depending on the file-system and on how the system itself is being used, it might even make sense to mount it so that files within it cannot be executed at all. (This might make sense for /tmp on some systems, or for other file-systems on others; it's an option worth considering.)
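On most systems this is done with the "nosuid" (and optionally "noexec") mount options. An illustrative /etc/fstab entry (the device name and file-system type will differ on your system):

    /dev/sd0g   /tmp   ffs   rw,nosuid,nodev          1 2
    # add "noexec" too, if nothing legitimately runs out of /tmp:
    # /dev/sd0g   /tmp   ffs   rw,nosuid,nodev,noexec   1 2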
Make sure to set the "sticky" bit on world-writable directories. This will prevent users from deleting (or overwriting) files within the directory which they do not own.
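For example:

    # set the sticky bit (the trailing "t" in the mode) on /tmp
    chmod 1777 /tmp
    ls -ld /tmp     # should show: drwxrwxrwt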
Also, on many systems, when a file is created within a directory, it inherits the group ownership of the directory. On a system I administered, that behaviour led to a couple of instances of a non-privileged user creating (accidentally, in those cases) executable files in /tmp which were setgid to group "system". Had the user been malicious, he might have used that to attempt to further elevate his privileges. /tmp on all systems I manage is now owned by group "nobody", an unprivileged group.
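The fix is a one-liner:

    # give /tmp an unprivileged group, so files created there
    # cannot inherit a privileged group ownership
    chgrp nobody /tmp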
From those binaries which do need elevated privileges (the exact list will vary from system to system), you'll want to remove read permission. That way, should something go wrong while the program is running, it won't be able to drop a core file; many systems refuse to dump core from a binary the user cannot read. (Core dumps on many systems follow symbolic links, so if I place a symlink named "core" in my current working directory, pointing to /etc/passwd, and then cause the "su" program to dump core, I might overwrite the system's password file.)
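Using "su" as the example (the path varies by system), mode 4111 leaves the setuid bit and execute permission in place while removing all read permission:

    chmod 4111 /usr/bin/su
    ls -l /usr/bin/su     # should show: ---s--x--x  root ...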
In some instances, it might also make sense to make a setuid binary executable only by human users, via a "users" or similar group. There really isn't much reason for an otherwise unprivileged pseudo-user such as "www" (for example) to be able to run most setuid binaries, especially those which might be used to elevate privileges.
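A sketch, again using "su" (the group name and path are illustrative; since at least one execute bit remains set, root can still run the binary):

    chgrp users /usr/bin/su
    chmod 4110 /usr/bin/su    # ---s--x---: only group "users" may run it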
There are many programs which will require setuid (or setgid) permission. A brief list (off the top of my head) might include:

    su
    passwd
    login
    ping
    crontab and at
    the rsh, rlogin and rcp clients
    write (setgid to a "tty" group on many systems)
    ps (setgid to a group such as "kmem" on some systems)
There are most certainly others, but this should give you an idea of what sorts of programs you'll need to leave with elevated privileges. Remember that it's very unlikely that non-human users on your system would need to run any of those, so it makes sense to make them executable only by a group that contains only human users (usually "users").
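To see what you're starting with, you can list every setuid/setgid file on the local file-systems, along with its owner and group:

    find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls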
Using such a mechanism helps reduce the possibility that passwords can be "guessed" (using password-cracking software). The system password file contains a lot of information about each user, and must be readable by all users on the system in order for some programs (some as simple as "ls") to work correctly. As such, anyone with access to an account on the system can obtain a copy of the system password file. If that file contains the encrypted passwords, then given the computing power available in the average personal computer these days, it has become rather trivial to recover users' passwords.
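For illustration, with shadow passwords in place an /etc/passwd entry carries only a placeholder in the password field (the user here is made up, and the exact placeholder character and shadow file name vary between systems):

    alice:x:1004:100:Alice Example:/home/alice:/bin/sh

The real encrypted password lives in a file such as /etc/shadow (or /etc/master.passwd on some BSDs), readable only by root.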
In addition to using some form of shadow password mechanism, you should use a password-changing program which enforces a policy of selecting difficult-to-"guess" passwords. This way, even given the encrypted passwords and a fast computer, brute-force guessing becomes far from trivial. Given sufficient time and resources, mind you, it could still be done, so you want your password policy to make it as difficult as possible, while still permitting users to create passwords they can remember.
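On Linux systems which use PAM, for example, a module such as pam_cracklib can enforce this kind of policy. A sketch (the module names and parameters are illustrative, and vary between systems):

    # /etc/pam.d/passwd
    password  required  pam_cracklib.so retry=3 minlen=10 difok=3
    password  required  pam_unix.so use_authtok shadow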
Some sites and some system administrators believe that passwords can be protected by forcing users to change them frequently. Although there is merit to encouraging users to change their passwords regularly, my own feeling is that forcing them to do so can potentially work against their protection. I've written a document which explains some of the issues involved, and some approaches we use where I work to address these issues. No single policy will work for every site, though. What's important is that the problem of password protection be carefully considered and a policy adopted that addresses these concerns in a suitable manner for an individual site.
Remember: if users can't remember their passwords, or are forced to change them too frequently, they'll write them down, and a password written on paper requires no computing resources at all to "decrypt".
Certain services work in your favour, of course. An "ident" daemon, for example, can help you trace a user who may have done something that causes the sysadmin of another system to complain to you ("why did user@yourhost try to telnet to ..."), or can increase the granularity of certain network access controls (we will accept ssh connections from only user@myhost, rather than from all users on myhost).
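With OpenSSH, for instance, that per-user, per-host restriction can be expressed in the server's configuration (the user and host names here are made up):

    # /etc/ssh/sshd_config
    AllowUsers alice@myhost.example.com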
Also check that no stand-alone daemons you don't need are running and accepting network connections from the outside. Check configuration files, and adjust them where necessary.
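A quick way to take stock of what's listening (the exact flags vary a little between systems):

    netstat -an | grep LISTEN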
For example, rshd and rlogind both check the /etc/hosts.equiv file and the target user's .rhosts file. You may need to enable those services (if you do, I strongly recommend NOT using /etc/hosts.equiv for authentication unless you are the system administrator for all systems listed there, and you know that all your userids match on all of them), but some of your users may not be aware of how to set up their .rhosts files intelligently, and may attempt to use "+" signs to allow all users on a machine access to their account (or all users on any machine, or a specific user on any machine, and so on).
For various reasons, this is a bad idea, and many Unix systems now permit the system administrator to include the string "NO_PLUS" in the hosts.equiv file to turn off acceptance of "+" signs in .rhosts files. If you must permit rsh/rlogin access and your system supports NO_PLUS, use it.
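It's also worth scanning your users' .rhosts files for wildcards yourself. A small sketch, assuming home directories live under /home (adjust for your layout):

    for dir in /home/*; do
        if [ -f "$dir/.rhosts" ] && grep -q '+' "$dir/.rhosts"; then
            echo "wildcard found in $dir/.rhosts"
        fi
    done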
Know that with rsh-type services (including ssh), if you trust a host (or an account), you are also trusting all other hosts (or accounts) that it trusts. Understand your trust relationships, and review them periodically.
You also want to have various monitoring scripts which run daily and give status reports on different aspects of your systems, such as disk space availability, active network ports, new setuid/setgid files, new world-writable directories, etc. The actual requirements of each system vary, but the point is that you want to collect the information regularly in a format that's easy to read.
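A minimal sketch of the setuid/setgid portion of such a nightly job, assuming a /var/adm directory for the state files (the paths are illustrative, and the first run will complain until a "yesterday" file exists):

    # report new or removed setuid/setgid files since the last run
    find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -print | sort > /var/adm/suid.today
    diff /var/adm/suid.yesterday /var/adm/suid.today
    mv /var/adm/suid.today /var/adm/suid.yesterday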
If at all possible, you want to store the data from integrity-checking programs on a separate, very tightly controlled system, so that should the system being monitored be compromised, the integrity data cannot be modified to cover up the compromise.
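One simple sketch of the idea: build a checksum database of the key binaries and ship it off to a hardened host ("loghost" here is a hypothetical machine, and md5sum may be called "md5" on some systems):

    find /bin /sbin /usr/bin /usr/sbin -type f | xargs md5sum > /tmp/integrity.db
    scp /tmp/integrity.db admin@loghost:integrity.`hostname`.`date +%Y%m%d`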
Since what you're protecting is generally system availability and data integrity, it only makes sense to have good backups to turn to in the event of disaster. Backups are only as good as your ability to restore data from them, so test your backup program thoroughly and regularly, by performing (perhaps random) restores and comparing each restored file with the original.
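A sketch of such a test, assuming last night's backup was a tar archive on tape (the device name and file chosen are illustrative, and the member path depends on how the archive was written):

    mkdir -p /tmp/restoretest
    (cd /tmp/restoretest && tar xf /dev/rmt0 ./etc/hosts)
    cmp /tmp/restoretest/etc/hosts /etc/hosts && echo "restore OK"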
Backups also affect how you look at other aspects of the system, such as file-system layout. Perhaps you don't really need to back up /tmp, in which case putting it on its own file-system lets you save tape and time by skipping it. Or, since /usr is not likely to change frequently (and therefore not likely to need frequent restores), it might make sense to give it its own file-system and back it up near the end of the backup cycle.
Where backup tapes are stored is, of course, a concern with respect to system security. If an intruder can easily get at backup tapes, they may not need to use network methods to gain access to privileged data on a system. On the other hand, if an authorized user needs a file restored, but the tapes must be flown in from some remote island in the middle of the Pacific Ocean, that's not the best situation to be in either. (Unless, of course, you happen to be on a remote island in the middle of the Pacific Ocean...)
This is by no means meant to be an exhaustive list, but it is my hope that it can help people know what to look at when they consider securing a Unix or Linux system. Certainly if all the issues covered in this list are considered when setting the system up, the system will be in a pretty good state, and the system administrator will be in a good position to know when a change occurs on the system.
I hope this will be considered useful by many.