By Deb Shinder | January 4, 2008, 7:52 AM PST
Unified communications (UC) makes it easy to get your messages no matter where you are. But is there a dark side to all this convenience? Will UC also make it easier for spammers to find and target you with their advertising messages?
By Chad Perrin | December 31, 2007, 8:13 PM PST
Security professionals everywhere, myself included, might want to think long and hard about why the best security essay of 2007 wasn't even about security. It is a late entry to the running for 2007, published on a personal weblog on the 14th of December. The author, Ben Orenstein, is a software developer, and the essay is titled On the fundamentals of programming. While the content of the essay never references security, even obliquely, all the principles it touches on relate very well to security matters. As both a security professional and a programmer, I believe I have a leg to stand on when I say this is probably the best essay of 2007 not only about security, but about its intended subject -- programming -- as well.

As Ben Orenstein put it: to become a better programmer, one should practice like a musician. The key, as he observes, is that one learns best and most completely by doing -- not merely by reading and listening, or by buying the most expensive toys. That applies to all fields of endeavor, including IT security.

This message holds particular interest for me, not only because I'm both an IT security pundit these days (writing for this weblog) and a programmer, but also because I'm a relatively recent musician. I finally graduated from a long-time loaner Samick bass to a brand new Ibanez Soundgear bass of my own. It is only in retrospect that I realize I have learned about IT security primarily by doing, tackling tasks of gradually increasing difficulty. No real framework exists for a proper iterative progression of tasks in IT security the way it does for music, or even for programming if you look hard enough. In music, repetition of simple patterns (as I'm finding out first-hand, for the second time in my life) is enough to teach fundamental principles to the beginner. All it really takes is a basic ability to recognize patterns and a lot of practice, which generally takes the form of practicing scales or chords.
Some good examples of similar practice patterns for programming show up in Ben Orenstein's essay, and in the comments that follow it. Where do you find the same thing for IT security -- or for security in general? One can take a very unstructured approach, of course, in the form of simple personal privacy management, malware defense, firewall configuration, and all the other basics of personal security. Such practice, however, tends to take the form of learning how to use the currently available tools to implement the currently best-understood security practices. It takes a better capacity for recognizing patterns, and a lot more practice, to sort out the principles that form the foundation of that practice, and such an approach tends to leave significant holes in one's understanding of the basic principles of security.

How long could someone configure mandatory access controls and email encryption tools before arriving at the same conclusion as Auguste Kerckhoffs and Claude Shannon -- that security through obscurity is not security at all? It could take years. In fact, you may never learn that lesson, as demonstrated by a significant percentage of the people doing professional security work in the world today.

If I knew of a better way to learn by doing in the field of security, I'd share it with you. I'm just not sure how one would get a clearer view of the underlying principles of security through practice than by starting with the small tasks of security and working up in such an ad-hoc fashion. Formal instruction in security, the sort of thing you get from security certification courses and instructional seminars, hands you concepts on a plate. It doesn't really give you the kind of deep understanding of concepts you get from practice.

Where do we go from here? What are the scales and chords of IT security? If you can figure it out, let me know.
By Chad Perrin | December 29, 2007, 6:28 PM PST
I'll close out the holiday season for the IT Security weblog here at TechRepublic by presenting one of the most
amusing pieces of security culture to come out of 2007. Without further ado, I present PGP Corporation's new security awareness Christmas jingle, The 12 Threats of Christmas:
By Chad Perrin | December 18, 2007, 5:15 PM PST
At 11 and 11:30 PM this Christmas, a new show called Tiger Team will air on Court TV. It follows the activities of a penetration testing IT security consulting team as they test security policies on client networks. There's already a Wikipedia article for the show. The blurb on courttv.com says:
This vérité action series follows Tiger Team -- a group of elite professionals hired to infiltrate major business and corporate interests with the objective of exposing weaknesses in the world's most sophisticated security systems, defeating criminals at their own game. Tiger Team is comprised of Security Audit Specialists Chris Nickerson, Luke McOmie and Ryan Jones who employ a variety of covert techniques -- electronic, psychological and tactical -- as they take on a new assignment in each episode.

Court TV seems like kind of an odd place for a series following the work of a penetration testing team, but I'm not looking this gift horse in the mouth. Of course, Court TV is going to be TruTV after the first day of 2008, and this seems like it might be a step in that direction. As a series concept, this shows a lot of promise. I hope it's executed well -- I'll be setting the DVR to record the first two episodes of Tiger Team while I'm out of town for the holidays.
By Paul Mah | December 16, 2007, 11:59 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers a backdoor in HP and Compaq laptops, the release of Nmap 4.50, the public release of Windows Vista SP1 release candidate, source packages of SquirrelMail being compromised, an SQL Injection vulnerability found in Typo3 CMS, the release of 11 packages in December's Patch Tuesday, and vulnerabilities in earlier versions of DirectX and DirectShow.
By Mike Mullins | December 13, 2007, 12:36 PM PST
Assessing your network for potential risks is part of the responsibility of providing network services -- if you don't find the problems on your network, you can be sure someone else will. Learn the four phases of an effective network risk assessment, and get best practices for conducting each phase.
By Chad Perrin | December 13, 2007, 11:02 AM PST
Any security professional worth his salt should be familiar with Kerckhoffs' principle, which states that a cryptosystem should be secure even if everything about the design of the system is public knowledge. The same concept was expressed by Shannon's maxim, "the enemy knows the system". In either case, the implication is clear: Don't rely on obscurity for security. The term "security through obscurity" has become a pejorative one in professional security circles. The way many people describe it, it refers to hiding the details of a set of security procedures because they aren't strong enough to stand on their own. One might define security through obscurity as security that relies on the stupidity of the enemy -- which is generally regarded as a bad idea. There are two sides to the "security through obscurity" coin:
- Intentional Security Through Obscurity: Security through obscurity may refer to an intentional act of trying to maintain or strengthen security by keeping security policies and procedures secret. This approach to security is behind such common vendor behavior as attempting to keep any and all vulnerability discoveries secret until after the vendor has the opportunity to release a patch (and spin the story to make the vendor sound good, of course). This occasionally has the effect of actually punishing security researchers for doing their jobs, and is generally more of a means of protecting the vendor than the end user. When security professionals talk about "security through obscurity", this is usually what they mean.
- Accidental Security Through Obscurity: In a more casual sense, the term "security through obscurity" is sometimes used to refer to the idea that a less well-known, less common, and thus less inviting target appears more secure statistically, even if it is not more secure technically. This is the concept behind statements commonly made on the Microsoft Windows side of the Windows/Linux security debate such as "Linux will have just as many security problems as Windows if it ever becomes as popular." The way the argument works is expressed by another formulation of the same idea: "Linux only looks more secure because it's so unpopular that nobody bothers to attack it."
Consider the relative rates of security breaches commonly cited in such debates:
- MS Windows suffers a greater statistical incidence of breaches than MacOS X.
- MacOS X suffers a greater statistical incidence of breaches than (most) Linux distributions.
- Linux distros tend to suffer a greater statistical incidence of breaches than FreeBSD.
- FreeBSD suffers a greater statistical incidence of breaches than OpenBSD.
This apparent correlation between popularity and breach rates raises several questions:
- Does the popularity of MS Windows make it a bigger target, thus leading to a greater statistical incidence of security breaches?
- Does a poor technical design with regard to security contribute to greater popularity for MS Windows?
- Is there some single cause of both greater popularity and poorer technical security design?
- Is there some single cause of both greater popularity and higher profile as a target aside from popularity itself?
- Is this apparent correlation all the result of a biased sampling of operating systems?
By Paul Mah | December 9, 2007, 10:45 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers two security updates released by Novell; updates for avast! antivirus, Skype, and Camino 1.5.4 for Mac OS X; a DoS vulnerability in the Cisco 7940 SIP phone; vulnerability issues with Windows Media Player; the latest legal threat made against security site Secunia; and how researchers were able to circumvent anonymization by Netflix.
By Chad Perrin | December 9, 2007, 3:46 PM PST
In addition to its common use for generating hashes used to verify the integrity of a downloaded file, the MD5 algorithm is also widely used for password authentication systems. It became the most common Unix password hash algorithm in the 1990s, in fact, and many Unix-like systems still default to MD5 for generating password hashes for purposes of backward compatibility. Unfortunately, MD5 is not a good password hash algorithm. The first major MD5 weakness was discovered as long ago as 1996. Since then, cryptographers have generally recommended the use of other algorithms, such as SHA-1 and Blowfish. The problem with the MD5 hash algorithm is that it suffers from a collision weakness: someone could generate two separate inputs that both produce the same hash output from the MD5 algorithm. This has some significant negative security implications. For instance, someone could create two files that produce the same cryptographic hash, one of which appears to be innocuous and the other of which matches the hash of the first but in some way defrauds or attacks someone who expects the innocuous message and uses the hash to verify it.

Because downloading software involves an implicit trust in the provider of the software in the first place, the potential for abuse in file verification hashes is very slim. Because you do not get to choose the inputs that will match a given, pre-existing hash, you cannot simply generate two versions of a program -- one benign and one malign -- and use that to slip malware past someone's defenses while providing an MD5 hash for verification that both software files match.
On the other hand, in authentication systems a password's only function is to produce a given hash, and circumventing the security of the authentication system does not require tricking a human being into believing a second input to a given hash is the same as the first. As a result, the security implications of a hash algorithm's collision weakness can be far greater than in the case of verifying a file download. For instance, offline brute force attacks against password authentication systems in many cases generate passwords and compare them to a local copy of the password's hash. When a password that authenticates successfully is found, it can be used to authenticate on the target system. Because more than one password may work for the same MD5 hash, a brute force attack becomes that much faster and easier.

The solution to this problem, of course, is to use a cryptographic algorithm that does not have this sort of collision problem. As I already mentioned, SHA-1 was recommended as a better choice than MD5 for many years. Unfortunately, as of 2005, it has been determined that SHA-1 also appears to suffer from a collision weakness. Most modern Unix-like systems offer a means of implementing the password authentication system with one of several hashing functions as its basis. Not only is the system designed to give a choice of algorithms, but it also provides a means of extending its capabilities to incorporate still more choices as technologies evolve. This modularity of the password authentication system on such Unix-like OSes is an important advantage for those who need secure systems, because it allows us to alter the default behavior of the system as needed to keep up with the changing threat landscape of our increasingly networked world.
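The offline attack described above amounts to hashing candidate passwords until one matches the stolen hash. Here is a minimal sketch in Python -- illustrative only; the function name and tiny wordlist are my own, and real attackers use optimized tools against enormous wordlists:

```python
import hashlib

def crack_md5(stored_hash, wordlist):
    """Offline dictionary attack: hash each candidate password and
    compare it against a stolen MD5 password hash."""
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).hexdigest() == stored_hash:
            return candidate  # any input producing this hash authenticates
    return None

# md5("password") -- a famously weak password and its well-known hash
stolen = "5f4dcc3b5aa765d61d8327deb882cf99"
print(crack_md5(stolen, ["letmein", "qwerty", "password"]))  # → password
```

Note that nothing in the loop ever contacts the target system: with a local copy of the hash, the attacker can test candidates as fast as the hardware allows, which is exactly why a fast hash function like MD5 is a liability for password storage.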
As old cryptographic algorithms become obsolete or are discovered to have cryptanalysis weaknesses, newer and stronger algorithms can be substituted for them to ensure continued system security without requiring sometimes costly migrations to newer system architectures. Most modern Unix-like systems still default to MD5, but for instance both FreeBSD and Debian GNU/Linux allow you to easily choose from among MD5, DES, and Blowfish. Blowfish, a symmetric block cipher created by Bruce Schneier in 1993, is generally believed to be the strongest of the three, and should be your cryptographic algorithm of choice for authentication on most Unix-like systems at this time. In fact, Blowfish was created as a replacement for DES, and there are no known cryptanalysis weaknesses in the Blowfish cipher. It is incredibly easy to change your password authentication system's cryptographic function from the default MD5 to Blowfish on FreeBSD:
- First, edit the file /etc/login.conf so that the line reading :passwd_format=md5:\ now reads :passwd_format=blf:\ instead.
- Next, rebuild the login database with the command
> cap_mkdb /etc/login.conf
- Finally, edit the file /etc/auth so the line crypt_default = md5 reads crypt_default = blf. Make sure the line is not commented out; if it is, delete the # at the beginning of the line.
To make the equivalent change on Debian GNU/Linux:
- First, install the libpam-unix2 module. That can be done simply via APT, Debian's software management system, using the command
# apt-get install libpam-unix2
- Next, edit /etc/pam.d/common-auth, /etc/pam.d/common-account, /etc/pam.d/common-session, and /etc/pam.d/common-password so that in each file you replace pam_unix.so with pam_unix2.so.
- Finally, while you are editing the common-password file, change the term md5 so that it reads blowfish instead.
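Once the change takes effect, newly set passwords show a different prefix in the system's shadow file: modular crypt hashes tag the algorithm, with $1$ indicating MD5 and $2$/$2a$ indicating OpenBSD-style Blowfish (bcrypt), while traditional DES crypt produces a bare 13-character string. A quick way to check which scheme generated a given hash -- the helper function here is my own illustration, not a system tool:

```python
def crypt_scheme(hash_str):
    """Guess the crypt(3) scheme of a shadow-file hash from its prefix."""
    if hash_str.startswith("$1$"):
        return "md5"
    if hash_str.startswith(("$2$", "$2a$")):
        return "blowfish"  # bcrypt, the OpenBSD-style Blowfish scheme
    if len(hash_str) == 13 and "$" not in hash_str:
        return "des"  # traditional 13-character DES crypt
    return "unknown"

print(crypt_scheme("$1$3azHgidD$SrJPt7B.9rekpmwJwtON31"))  # → md5
```

If an account still shows a $1$ hash after the switch, the password simply has not been reset since the change; the new algorithm applies only when a password is next set.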
By Chad Perrin | December 7, 2007, 1:16 PM PST
That one section of the file logs in as administrator, if you are not, turns off warnings, collects data from your computer, sends that data to Microsoft, then turns warnings back on and logs off as administrator.

This quote explains the suggestion that Microsoft may be breaking the law (again):
Q: What information is collected from my computer?
A: The genuine validation process will collect information about your system to determine if your Microsoft software is genuine. The validation tools do not collect your name, address, e-mail address, or any other information that Microsoft will use to identify you or contact you. The tools collect such information as:
- Computer make and model
- Version information for the operating system and software using Genuine Advantage
- Region and language setting
- A unique number assigned to your computer by the tools (Globally Unique Identifier or GUID)
- Product ID and product key
- BIOS name, revision number, and revision date
- Volume serial number
- Office product key (if validating Office)
- Whether the installation was successful
- The result of the validation check

On the other hand, the implementation of this feature of WGA/MGA behavior leaves something to be desired:
- A tool that logs itself into an account with administrative access, then turns off the system's security warnings system, constitutes a tremendous potential security threat -- even if the tool itself is not malicious. The potential for abuse is a touch disturbing to consider.
- It's also interesting to note that the behavior of WGA/MGA is something that MS Windows' own security features would consider a threat, necessitating this temporary deactivation of the warning system. This strikes me as an unintentional indictment of the entire process of validation in this manner, and digital rights management systems in general. They are, in effect, legitimized malware -- and here's a demonstration of the whys and wherefores.
- The fact that this sort of behavior is even possible -- not merely as an overlooked bug, but as an intended part of the design of Microsoft's security features -- constitutes a security risk of its own. It also starts one thinking about whether this approach to producing security alerts and "protecting" the user could even be designed to disallow such security risks at all. In other words, it's a strong piece of evidence of a principle of security by which I've lived for years: Bolted-on security is not even as strong as the bolts. Call it "Perrin's principle of integrated security" if you like.
By Mike Mullins | December 6, 2007, 1:23 PM PST
The security you add when managing routers can make the difference between providing a functional and responsive network or an isolated intranet that provides services to no one. Take these steps to maintain router security.
By Chad Perrin | December 5, 2007, 11:20 PM PST
Professor Ronald Rivest of MIT created the MD5 cryptographic hash function in 1991 to replace the earlier MD4 algorithm. It employs a 128-bit hash value, typically expressed as a 32-character hexadecimal number. For instance, an MD5 hash generated from an OpenOffice.org download (v2.3.0 for Win32, English language) looks like this:
beda08800f9505117220b6db1deb453a

Since that time, MD5 has become an Internet standard (see RFC 1321 for details), and has come to be used for a great many purposes. While I am not aware of any statistical studies that support or dispute this, I believe the two most common uses are:
- hash comparison for password authentication
- hash comparison to verify file integrity
Verifying the integrity of a downloaded file matters for two reasons:
- The file may have been corrupted during download, such as by lost packets if there is significant network latency.
- Someone may have arranged for your download to be compromised, so that you receive a modified or substitute file that, when executed, can be used to crack security on your computer.
> md5 test.txt
MD5 (test.txt) = d76b04fbbf392f6917e119bedf78d2ef

As you can see by comparing this with the OpenOffice.org Using MD5 Checksums page, the FreeBSD md5 utility can be used the same way as the Linux md5sum utility. The only difference is the format of its output. While MD5 is not the strongest cryptographic hash tool in the world these days, it is still generally useful for verifying file integrity when downloading software. Because so many open source software development projects use MD5 hashes for verification, it is a good idea to learn how to use it and keep an MD5 hash generating tool handy in case you ever need to go outside of a secure software management system when installing software.
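If you verify downloads often, the same comparison is easy to script. Here's a minimal Python sketch (the function names are my own; it reads the file in chunks so large downloads need not fit in memory):

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, published_hash):
    """Compare a file's MD5 digest against the project's published hash."""
    return md5_of_file(path) == published_hash.strip().lower()
```

A True result only tells you the file matches the published hash; it says nothing about whether the publisher itself is trustworthy, which is the implicit-trust point made earlier.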
By Chad Perrin | December 3, 2007, 4:21 PM PST
Have you ever wanted to learn about cryptography at college, but never really had the opportunity? The University of Washington has made it possible without having to set foot outside your home or pay a penny in tuition fees. CSE P 590TU: Practical Aspects of Modern Cryptography is now available online. I don't know about you, but to me this sounds like it could be a lot of fun. Cryptography is one of the most interesting and important subjects in the IT industry, in my opinion -- right up there with AI. Something they both have in common is that they are open-ended fields that will never really have a single, clean "solution", as far as I can tell. Like strategies for a game of Go, there's always room for advancement, which means there's always another challenge ahead. As such, I'm always interested in another approach to teaching the topic. There is always something new to learn. So I'll be going through the materials available in this online presentation of the University of Washington course as soon as I can set the time aside. I estimate I'll get around to it in January.

In a recent IBM-sponsored webcast titled Securing Networks Without Borders, in which I was one of the featured guests, the central subject was how to secure our online activities when they are so rarely limited by the traditional network perimeters defined by firewalls and routers. In a discussion that was less than an hour long and covered such a wide range of topics, I could not be sure I knew what everyone thought about every subject, of course. It's hard to imagine, though, that a security professional like John Pironti (another featured guest) doesn't regard encryption as central to the ability to secure our data when it leaves the perimeters of our networks.
That being the case, it seems obvious to me that anyone with an eye toward the effective security techniques of the near future should familiarize himself or herself with the basic concepts of cryptography. While I have not been through all the materials yet, the textbook list alone is encouraging. Some of the most respected introductory texts on cryptography in the world are represented there. People ask me fairly often about the best places to start learning about IT security, and the question came up in that 28 November webcast. I wrote an article about that for my first entry here at TechRepublic's IT Security weblog, but that was only a general overview to give you an idea where to start searching. More specifics depend on which areas of IT security you think deserve your focus. Your needs will differ depending on what you are going to do with your growing knowledge, of course, such as whether you will be writing software for a web startup or protecting a medical records database -- or even just trying to protect yourself while using the wireless network at a coffee shop. If you see cryptography in your future, the free, online availability of a complete college course in "practical aspects of modern cryptography", complete with presentation slides and recordings of class sessions, is nothing to sneeze at.
By Paul Mah | November 30, 2007, 11:59 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers a new QuickTime bug that affects both XP and Vista, a new release of Firefox, security updates for FreeBSD, the release of Microsoft Exchange SP1, Cisco's official acknowledgement of flaws in its VoIP phones, new versions of Asterisk that fix two SQL injection vulnerabilities, the cracking of Microsoft's encryption for its wireless keyboards, and a vulnerability found in IBM's Lotus Notes product.
By Chad Perrin | November 30, 2007, 12:24 PM PST
Wired Magazine's "blog network" ran a story early this month about encrypted webmail provider Hushmail. The company's marketing is very heavy with the "your emails are safe and nobody can read it" rhetoric, going so far as to say:
not even a Hushmail employee with access to our servers can read your encrypted e-mail

As the Wired weblog article points out, though, this is apparently nothing more than exaggeration and outright falsehood. Despite the fact that even Hushmail supposedly cannot read your emails, the company reportedly turned over 12 CDs full of saved email correspondence from three Hushmail accounts to Canadian officials, complying with a court order. There are some reasonable caveats to the statement that Hushmail (the company) cannot read emails sent via Hushmail (the software and service), and I'll get to those in a moment.

First, let's talk about relevance. Some of you, reading that Wired article, may think "Well that doesn't apply to me. I'm not a black market steroid dealer." On the other hand, at the time the court order was issued, the targets of this surveillance weren't known steroid dealers, either. They were just suspected steroid dealers. Innocent people get investigated all the time; the hope is that they're determined to be innocent before the investigation ruins their lives. Let's assume the system always works in that regard -- not just that a trial in a court of law would find you innocent, but even that law enforcement officers determine your innocence early enough that it never gets to court, even early enough that the school where you teach third grade children never finds out you were naked at Woodstock.

Now let's look at the privacy angle. Regardless of what an investigation determines about your guilt or innocence, if the investigators get access to nominally encrypted emails, your privacy is breached. It's too late to undo that damage. Even if they're investigating the wrong person, and even if the information in your emails not only doesn't prove you're guilty but isn't even useful in proving you're innocent, if your email correspondence was recorded on some of those twelve CDs, someone read it.
Not only that, but for someone in government to read it, someone at Hushmail had to recover it -- which means it is recoverable. If it can be recovered once, it can obviously be recovered again. So much for privacy.

Now for the caveats. Hushmail's claims about how not even Hushmail employees can read your email assume you are using its Java-based client. The suspected steroid dealers in question were apparently using the server-side encryption option. Apparently, a lot of people use this option because they do not like the hassle of downloading and installing a JVM and using Hushmail's Java-based interface. I don't really blame them for that, but using the server-side encryption system punches a great big hole in your assumed privacy. The fact that the email is initially sent to the server over an SSL-encrypted connection is designed to ensure that Hushmail receives the text of your email without an outsider being able to eavesdrop. It doesn't in any way protect against an insider at Hushmail being able to read the text of the email -- and apparently Hushmail stores the emails in a readable form after they have been encrypted and sent on to the destination.

One might think this just means you shouldn't use the server-side encryption option if you actually care about privacy for your emails. This is true, as far as it goes, except for that word "just". It goes beyond that. Unlike open source OpenPGP software like GnuPG, Hushmail's Java client is not subject to public scrutiny. Aside from the obvious problems of encryption software that doesn't trust its users, there's also the simple fact that you do not really know it is doing what you expect, nor do you have any reasonable way to find out. Source code that is open to review but not available to be compiled and used directly (like Hushmail's Java client) -- where providing source code is separate from running compiled software -- provides nothing more than an illusion of openness.
In addition, keeping software up to date with security patches is important, but it can also be an attack vector itself, at least in the case of closed-source software designed to provide security and privacy. After all, if a closed source encryption software vendor decides it needs the ability to recover plain text from emails encrypted by the client software, it can always just push out a "security patch" that allows it to harvest private encryption keys. The end user need never know. Worse yet, Hushmail's Java interface uses an applet -- not a locally installed application -- which means that one never even really knows that the Java applet being executed this time is the same one that was used last time, because it is downloaded as compiled bytecode all over again every time it is used.

In IT security as in any other field, for maximum effectiveness one needs to let an expert do much of the work. There just isn't enough time in the day for everyone to do everything oneself, nor even to learn enough to be able to do everything oneself if one had more time in the day to do it. It is for this reason that I use a lot of encryption software written by other people, rather than writing it all myself, and I do not expect any of you to behave any differently when choosing encryption solutions. There are things you can, and should, do yourself if at all possible. What those things are may vary from case to case, based on your specific privacy needs. Sorting out what you need to do for yourself depends on the ability to correctly judge where you reach a point of diminishing returns. If you do not want people you don't even know to be able to trivially decrypt your emails because they pass through those people's servers, however, it behooves you to employ encryption systems that do not rely solely on the good will of those people to maintain your privacy.
You may not realize it yet, but you can gain many of the benefits of doing things yourself without actually doing them. The simple fact that you can (at least in theory) do something yourself is often helpful in providing some of the benefit of actually doing it. Even more of that benefit can often be gained by dint of the probability that someone else is doing it, even if you have no contact with any such people and cannot name them. The following is a list of three examples. The first is an example of what you could easily do yourself. The second is an example of something you can benefit from someone else doing. The third is an example of something you can benefit from being able to do, in theory, even if nobody does it.
- Use encryption software that gives you direct control over end-to-end encryption privacy, without a middle man doing all the "hard" stuff for you. For instance, use GnuPG and your local mail user agent software to encrypt and send emails without any reliance on a third-party encryption "service" provider like Hushmail.
- Enjoy the security benefits of a community of thousands of developers who examine the source code and operation of the encryption software you use for signs it misbehaves in some way. With any popular open source software (such as GnuPG), there are enough people poring over the source code on any given day that you can be pretty sure no intentional security holes will survive, and anything accidental is likely to be caught by a "good guy" before a "bad guy" can find it. Security patches also tend to be more effective and to arrive more quickly.
- Ensure you have access to source code yourself. If for some reason you have to use a closed source application for your mail user agent, and if you are buying software for use by your company, you can often arrange to license access to the source code and compile the application yourself -- so long as you do not try to modify it or redistribute any part of it. This provides you the ability to personally do the same thing that the open source community at large does with open source software. Even if you do not actually intend to pore over source code looking for intentional back doors, getting access to the source code in this manner "just in case" provides some fairly strong assurance that the providing vendor is not trying to hide anything from you. Of course, as with the hypothetical client patch issue above, this doesn't help if you do not have the option of downloading patch source and recompiling the application yourself rather than just applying binary patches.
By Mike Mullins | November 29, 2007, 1:09 PM PST
If you're not too familiar with the registry and how it works, there are a slew of different companies that would like to sell you a registry cleaner. Do you need to clean your registry? Let's look at the facts.
By Paul Mah | November 28, 2007, 6:01 PM PST
In my last post, I talked about the dangers that the humble USB port can pose to the unsuspecting security administrator. I also suggested some possible ways of dealing with this often overlooked vector. This time, I want to talk about one of my suggestions -- whitelisting. It's a technology that's been around for a while now, but it's something that antivirus companies probably don't want you to know too much about.
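The core idea behind whitelisting is simple enough to sketch: instead of trying to enumerate every bad program (the blacklist approach the antivirus industry sells), you allow only executables whose cryptographic hashes appear on an approved list, and deny everything else by default. A minimal sketch, with a hypothetical approved set:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_whitelisted(path: str, approved: set) -> bool:
    """Allow execution only if the file's hash is on the approved list."""
    return sha256_of(path) in approved
```

Real products enforce this at the kernel level and manage updates to the approved set; the sketch just shows the deny-by-default principle that makes the approach fundamentally different from blacklisting.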
By Chad Perrin | November 28, 2007, 12:24 AM PST
The Identity Theft Enforcement and Restitution Act of 2007 passed the Senate by unanimous consent. As is often the case in our nation's legislature, the two houses -- the House of Representatives and the Senate -- are duplicating effort, each having worked on very similar bills. The House version, however, has not yet left subcommittee deliberation for consideration by the full House. The Senate bill, should it be enacted as law, amends Title 18 of the US Code to address conspiracy to commit what our Congress terms "cybercrime", close loopholes in current law against extortion, give victims of identity theft increased ability to seek restitution, and specifically address the phenomenon of botnets. The ITERAct attempts to deal with botnets by making it a crime to "damage", whatever that means, ten or more computers in a single year. Tim Bennett, the president of the CSIA, said "This cybercrime bill is an integral part of the cybercrime fight, but it is also imperative that this Congress address through legislation other aspects of the problem, such as data security, to prevent criminals from getting sensitive personal information in the first place." Security industry vendors don't seem terribly optimistic about the prospects of such a bill passing the House of Representatives before the end of the year, however, considering the way most of the House's time has been diverted by matters related to the war in Iraq and "homeland security". Add to that the return of deliberations over fiscal year budgeting, and it's no wonder Symantec's federal government relations manager Kevin Richards said "prospects for this year don't look so good". If such a data protection bill passes Congress, its phrasing will bear watching. It could easily go one of several ways.
The best possible outcome, in my estimation, would be a true digital privacy bill that reinforces the implications of the Fourth and Fifth Amendments of the US Constitution. Assuming it was more than a lame duck law, such an act would to a significant degree protect against the type of abuses of power we've seen in wiretap scandals of recent years, USA PATRIOT Act provisions, and the potential for NSA-designed backdoors in common encryption standards such as the speculated intentional weakness in the Dual_EC_DRBG NIST encryption standard. While the above-linked Wired article by Bruce Schneier is certainly worth the read, I'll summarize a bit for you:
- NIST released a new official standard for random number generation software used in encryption algorithms, called NIST Special Publication 800-90 [PDF].
- That standard defines a set of four DRBGs approved for government use and recommended for widespread public use.
- The NSA championed the elliptic curve-based generator, Dual_EC_DRBG, for inclusion in the NIST standards.
- Dual_EC_DRBG is slower than pond scum running uphill and contains a small, but measurable, numerical bias -- problems none of the other new NIST standard DRBGs share, which makes one wonder why the NSA bothered to push for its inclusion.
- Dual_EC_DRBG contains a mathematical "back door", one that may or may not have been intentional and for which the NSA may or may not have the key. Reverse-engineering the key should be a significantly difficult task, perhaps effectively impossible at current technology levels, but it could very easily have been generated at the time of creation of the constants used to define the algorithm's elliptic curve. For more information on what that means, I recommend some heavy Googling -- it's a subject well beyond the scope of this article.
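The "small, but measurable, numerical bias" mentioned above is exactly the kind of flaw you can check for yourself with a monobit frequency test (the simplest test in NIST's own SP 800-22 randomness suite): count the fraction of 1 bits a generator emits. Here's a quick sketch comparing the OS entropy source against a deliberately crippled stream, which stands in as a hypothetical biased DRBG:

```python
import os

def monobit_fraction(data: bytes) -> float:
    """Fraction of 1 bits in the stream; an unbiased generator stays near 0.5."""
    ones = sum(bin(byte).count("1") for byte in data)
    return ones / (len(data) * 8)

fair = os.urandom(1 << 16)               # 64 KiB from the OS entropy source
biased = bytes(b & 0x7F for b in fair)   # cripple it: top bit of each byte always 0

# The fair stream hovers near 0.5; the biased one near 0.4375 (3.5 of 8 bits set).
```

Dual_EC_DRBG's bias is far subtler than this toy example and takes much more data to detect, but the principle -- measure the output yourself rather than trusting the vendor's assurances -- is the same.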
By Chad Perrin | November 25, 2007, 8:26 PM PST
There's an old saying, usually attributed to Confucius, that goes something like "Give a man a fish, and you'll feed him for a day. Teach a man to fish, and you've fed him for a lifetime." There's an important life lesson in that simple statement. Some people translate it conceptually into something like "Education is the most important thing you can give someone to better his circumstances." I'm not sure that's really getting to the heart of the matter, or always accurate for that matter -- though it's probably close enough for government work. The translation I like goes something like this: Give a man the answer, and he'll only have a temporary solution. Teach him the principles that led you to that answer, and he will be able to create his own solutions in the future. It's considerably less catchy, of course, but I think it gets down to brass tacks much better than limiting the meaning of the aphorism to traditional charity. If you go with the education translation, you're talking about nothing but how to elevate the standard of living in third world countries, which is important but hardly the one universal problem of life. In fact, the quote about education doesn't even make full use of the statement within the context of education, because formal education too often consists of nothing more than making children memorize answers, ignoring the importance of teaching them how to get to those answers in the first place. If, on the other hand, you refer to the difference between temporary solutions and principles for solving problems, you may very well not only improve someone's standard of living, but give that person the tools to improve himself (or herself, naturally). This is a central theme of most of my interactions with others when I discuss IT security. 
In IT security, more so than in many other fields of study and practice, it is important to be able to think for yourself, reason through the implications of what you are doing, and employ fundamental principles to come to sound conclusions. In many fields of endeavor, little more is required for success than memorizing some formulaic solutions developed by deep thinkers of the past who pioneered the field. IT security is a far more competitive field than most, however, because the primary concern of the IT security professional is someone trying to circumvent all his efforts. As a result of this state of affairs, the ability to reason from principles is all-important. Mere robotic imitation of "best practices" is not sufficient for any certainty of success. This is why many of the responsibilities of the IT security professional cannot simply be automated away. Automation decreases the workload, but it cannot effectively eliminate the workload entirely, even though the entire IT field is about automation. This is why my articles here in the TechRepublic IT Security weblog often focus on principles rather than recipes. Security recipes can be useful, too, of course -- and I have nothing against providing them, even given their necessarily temporary usefulness -- but the most important security writing I can do is to address basic principles. This applies to both what principles I know and how one can and should go about discovering more principles on one's own, even as far as discovering any flaws in the principles I offer. In my consulting work, and when writing documentation, I try to teach the clients and end users of my work the principles behind what has been done. Simply encouraging rote memorization of steps one should take in the short term is tantamount to encouraging someone's information technology systems to fail in the long term. 
The same is true of providing systems that attempt to automate away any user interaction without teaching the user about what is going on behind the scenes and why. When you not only fail to teach the principles to the end user, but actively hide the details of how things work, you are very directly setting the end user up for failure -- whether you intend that result or not. Some unscrupulous people regard such inevitable failure as job security. Some ignorant people regard it as an inaccurate estimate of the state of information technology, believing that somewhere out there someone can actually produce a system that does not require a knowledgeable user to ensure it will not fail spectacularly. While the user does not need to know everything about the system to ensure it continues to work, he or she does need to know enough to be able to check on how well it is working, and also needs to be willing and able to learn more about it as needed when problems arise. Passivity, especially in the realm of IT security, is usually a recipe for failure. An aphorism that is related to the one about teaching a man to fish, and similarly applicable to far more than just IT security, is one I made up years ago and have used when relevant ever since: The mark of a true professional is one who works toward the day he or she is obsolete. If you are an IT security consultant, and you are not helping your clients learn how to get along without your services, you are not really doing your job. Keep that in mind when you consider the ethics of your decisions as an IT professional.
By Chad Perrin | November 23, 2007, 11:41 AM PST
In early October of this year, Indiana University graduate student Christopher Soghoian gave a presentation in Washington, DC about the potential risks of online political contributions. While I wasn't able to attend the presentation (I was about 1,500 miles away from DC at the time), the subject is an interesting one at first glance. Soghoian's claim is that online political contribution channels provide a brand new means of defrauding Americans. In the words of the above-linked Wired article:
The presidential campaigns' tactic of relying on impulsive giving spurred by controversial news events and hyped-up deadlines, combined with a number of other factors such as inconsistent Web addresses and a muddle of payment mechanisms creates a conducive environment for fraud, says Soghoian.

One wonders what behavior in particular Soghoian observed that prompted him to address the matter. In some respects, it seems he might be reacting to the pledge drives for Republican candidate Ron Paul, a dark horse candidate who went from "no chance in the world" to "largest single day of campaign contributions before the primaries in history", in part because of such a pledge drive. The Remember the 5th of November donation drive netted him more than four million dollars of donations in a single day. There are of course other candidates attempting to achieve similar results. They tend to differ from the Ron Paul effort in a number of ways, however:
- They aren't generally grass-roots efforts. Most pledge drives for other candidates are organized with the official sanction and aid of the respective campaigns themselves, whereas the Ron Paul funding drive was organized as a grass-roots effort. There's another such effort gearing up for December 16th, too, apparently affiliated with the same people who got the November 5th effort going.
- They have not, at least so far, been as successful in terms of the money gathered or the sheer number of contributors.
- Nobody seems to care.
Fraudsters could easily send out e-mails and establish Web sites that mimic the official campaigns' sites and similarly send out such e-mails that would encourage people to "donate" money without checking for the authenticity of the site.

In other words, Soghoian's concern is that irresponsible behavior on the part of candidates' campaigns may teach people to be irresponsible with their own financial security. Soghoian claims that impulsive behavior -- akin to the "impulse buy" items in the supermarket checkout lane -- is being encouraged to get people to open up their virtual wallets and give to Presidential campaigns. The positive result, at least from a Presidential candidate's perspective, is that more money flows into the campaign war chest of the candidate. The negative result, at least according to Soghoian, is that people are being subtly trained to be less careful with their decision-making about credit card use online. Soghoian would have us believe that this somehow constitutes a new threat to the financial security of US citizens. The truth of the matter is that this is not a new threat at all, as Bruce Schneier pointed out. In Schneier's words:
Fake charities and political organizations have long been problems. When you get a solicitation in the mail for "Concerned Citizens for a More Perfect Country" -- insert whatever personal definition you have for "more perfect" and "country" -- you don't know if the money is going to your cause or into someone's pocket. When you give money on the street to someone soliciting contributions for this cause or that one, you have no idea what will happen to the money at the end of the day.

The problems here, as "SteveJ" (one of Schneier's commenters) pointed out, are twofold:
Of course there are two different issues here: trust and identity. Creating a false charity/campaign, which doesn't really do what it claims with donated money (trust), isn't quite the same thing as posing as a particular charity/campaign and pocketing the donations (identity).

This is a matter of authentication, put quite simply. Somehow, the place where you're donating the money must be authenticated to the satisfaction of the individual being exhorted to donate. We, as active members of US politics, must serve as individual authentication systems to ensure that any donations we give are being given to "the right person" -- both in terms of believing what the candidates say and in terms of making sure that the site you're using for your donation is actually going to put the money into your favorite candidate's campaign. Soghoian's solution is to centralize and certify all campaign contribution management with specific corporate organizations serving as clearinghouses. The specific examples cited by Soghoian and Markus Jakobsson, co-author of a whitepaper on the subject, are Paypal and Google. Ultimately, if you're paying attention enough to consider Google or Paypal to be more trustworthy than some candidate campaign Website, you're paying enough attention to be able to make some determinations of your own about who or what is trustworthy enough to send money. The difference is, at most, negligible. This also does not particularly protect you against the specific concerns Soghoian brought up for a number of reasons:
- If you are directed to a donation page from a Website designed to look like the campaign's official site, that site can be spoofed no less easily if it's at http://checkout.google.com than if it's at http://ronpaul2008.com -- the same phishing techniques apply.
- The high-pressure marketing tactics of many pledge drives will not be changed by where the actual donation link leads, and neither will a change in link destination change the potential negative effects those tactics might have on the security awareness of their target demographics.
- Centralizing the management of all Presidential campaign donations creates a single target on which phishers and other malicious security crackers can focus, and success may bring them far greater rewards. Why settle for redirecting the donation activities for a single candidate when you can target them all simultaneously?
- A conspiracy theorist might accuse Soghoian of trying to sideline any less well-known candidates who are less likely to be able to get into Google or Paypal donation clearinghouses. Even though that is probably not his intent, it is a more likely outcome of such a centralization of management than generally improved donation security. With those caveats in mind, here are some recommendations for contributing online more safely:
- Never donate to any Website other than the official campaign site. While there may be safe alternate channels for online donations, your best bet is always the official campaign Website.
- Never click on a link from a third-party pledge drive Website. Take a direct route to the campaign website, perhaps by checking the URL to which the link directs you and typing it into the address bar of your Web browser yourself. This will help you avoid spoofed sites.
- Do not copy the URL from a third-party site and paste it into the browser's address bar: actually type it. Use of Unicode characters to spoof the URLs of legitimate websites is something you want to be able to circumvent.
- Do some research via a search engine to ensure that the apparent official campaign site URL you are using is in fact the genuine article, and not a fly-by-night fake. For instance, http://johnedwards.com is the URL for the official John Edwards Presidential campaign Website, but http://johnedwardsforpresident.com and http://johnedwards2008.com are (as of this writing) still available. Either one might conceivably be used as a temporary landing area for misdirected would-be campaign contributors.
- Only contribute via campaign Websites that provide encrypted access for the transaction, and assess the security of the donation process yourself as far as possible before committing to a donation. If your candidate of choice does not provide adequate security, contact a campaign representative and inform them of your concerns. Then, either contribute offline or wait until the problems with online contribution security are fixed. You don't want your bank account to be cleaned out by malicious security crackers just to give your favorite candidate $100, I'm sure. Losing $10,000 of your hard-earned money is probably an unacceptable outcome for you, especially when you cannot be guaranteed that even that $100 donation will get through to the candidate's campaign if security is not sufficient to protect the contribution transaction.
- Vote your conscience, not your fears.
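The warning above about Unicode spoofing (so-called IDN homograph attacks) is easy to make concrete: a Cyrillic or Greek character in a hostname can render identically to its Latin look-alike, and internationalized hostnames travel through DNS as "xn--" punycode labels. A rough sketch of the kind of check one could script -- a crude heuristic only, and the example URLs are hypothetical:

```python
from urllib.parse import urlparse

def suspicious_hostname(url: str) -> bool:
    """Flag hostnames containing non-ASCII characters or punycode ('xn--') labels.

    Crude heuristic only: plenty of legitimate international sites use IDNs,
    so a hit means "look closer", not "definitely a spoof".
    """
    host = urlparse(url).hostname or ""
    if not host.isascii():
        return True          # raw Unicode character in the hostname
    return any(label.startswith("xn--") for label in host.split("."))
```

For example, "r\u03bfnpaul2008.com" (with a Greek omicron standing in for the Latin "o") trips the check, while the all-ASCII original does not. This is exactly why typing a URL yourself beats pasting one copied from a third-party site.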