By Chad Perrin | November 28, 2007, 12:24 AM PST
The Identity Theft Enforcement and Restitution Act of 2007 passed the Senate by unanimous consent. As is often the case in Congress, the two houses -- the House of Representatives and the Senate -- are working at roughly redundant purposes, each having drafted very similar bills. The House version, however, has not yet left subcommittee deliberation for consideration by the full House. The Senate bill, should it be enacted as law, amends Title 18 of the US Code to address conspiracy to commit what our Congress terms "cybercrime", close loopholes in current law against extortion, give victims of identity theft increased ability to seek restitution, and specifically address the phenomenon of botnets. The ITERAct attempts to deal with botnets by making it a crime to "damage", whatever that means, ten or more computers in a single year.
Tim Bennett, president of the Cyber Security Industry Alliance (CSIA), said "This cybercrime bill is an integral part of the cybercrime fight, but it is also imperative that this Congress address through legislation other aspects of the problem, such as data security, to prevent criminals from getting sensitive personal information in the first place." Security industry vendors don't seem terribly optimistic about the prospects of such a bill passing the House of Representatives before the end of the year, however, considering how much of the House's time has been diverted by matters related to the war in Iraq and "homeland security". Add to that the return of deliberations over fiscal year budgeting, and it's no wonder Symantec's federal government relations manager Kevin Richards said "prospects for this year don't look so good".
If such a data protection bill passes Congress, its phrasing will bear watching. It could easily go one of several ways.
The best possible outcome, in my estimation, would be a true digital privacy bill that reinforces the implications of the Fourth and Fifth Amendments of the US Constitution. Assuming it was more than a toothless gesture, such an act would to a significant degree protect against the types of abuses of power we've seen in wiretap scandals of recent years, USA PATRIOT Act provisions, and the potential for NSA-designed backdoors in common cryptographic standards -- such as the speculated intentional weakness in the Dual_EC_DRBG random number generator standardized by NIST. While the above-linked Wired article by Bruce Schneier is certainly worth the read, I'll summarize a bit for you:
- NIST released a new official standard for the deterministic random bit generators (DRBGs) used in cryptographic software, called NIST Special Publication 800-90 [PDF].
- That standard defines a set of four DRBGs approved for government use and recommended for widespread public use.
- The NSA championed the elliptic curve based generator, Dual_EC_DRBG, for inclusion in the NIST standards.
- Dual_EC_DRBG is slower than pond scum running uphill and contains a small, but measurable, numerical bias -- problems none of the other new NIST standard DRBGs share, which makes one wonder why the NSA bothered to push for its inclusion.
- Dual_EC_DRBG contains a mathematical "back door", one that may or may not have been intentional and for which the NSA may or may not have the key. Reverse-engineering the key should be a significantly difficult task, perhaps effectively impossible at current technology levels, but it could very easily have been generated at the time of creation of the constants used to define the algorithm's elliptic curve. For more information on what that means, I recommend some heavy Googling -- it's a subject well beyond the scope of this article.
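Incidentally, the "small, but measurable, numerical bias" mentioned above is easy to illustrate in miniature. The toy sketch below is emphatically not Dual_EC_DRBG (its elliptic curve internals are beyond the scope of this article); it simply shows how even a modest 2% skew toward ones in a bit generator stands out clearly over a large enough sample:

```python
import random

def biased_bits(n, p_one=0.52, seed=42):
    """Toy generator with a small bias: ones appear 52% of the time."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

def fair_bits(n, seed=42):
    """Unbiased reference generator for comparison."""
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n)]

n = 1_000_000
for name, bits in (("biased", biased_bits(n)), ("fair", fair_bits(n))):
    # Over a million samples, the 2% skew is unmistakable.
    print(f"{name}: fraction of ones = {sum(bits) / n:.4f}")
```

Real-world DRBG evaluation uses far more sophisticated statistical test suites, of course, but the principle is the same: bias that is invisible in any single output becomes obvious in aggregate.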
By Chad Perrin | November 25, 2007, 8:26 PM PST
There's an old saying, usually attributed to Confucius, that goes something like "Give a man a fish, and you'll feed him for a day. Teach a man to fish, and you've fed him for a lifetime." There's an important life lesson in that simple statement. Some people translate it conceptually into something like "Education is the most important thing you can give someone to better his circumstances." I'm not sure that's really getting to the heart of the matter, or always accurate for that matter -- though it's probably close enough for government work. The translation I like goes something like this: Give a man the answer, and he'll only have a temporary solution. Teach him the principles that led you to that answer, and he will be able to create his own solutions in the future. It's considerably less catchy, of course, but I think it gets down to brass tacks much better than limiting the meaning of the aphorism to traditional charity. If you go with the education translation, you're talking about nothing but how to elevate the standard of living in third world countries, which is important but hardly the one universal problem of life. In fact, the quote about education doesn't even make full use of the statement within the context of education, because formal education too often consists of nothing more than making children memorize answers, ignoring the importance of teaching them how to get to those answers in the first place. If, on the other hand, you refer to the difference between temporary solutions and principles for solving problems, you may very well not only improve someone's standard of living, but give that person the tools to improve himself (or herself, naturally). This is a central theme of most of my interactions with others when I discuss IT security. 
In IT security, more so than in many other fields of study and practice, it is important to be able to think for yourself, reason through the implications of what you are doing, and employ fundamental principles to come to sound conclusions. In many fields of endeavor, little more is required for success than memorizing some formulaic solutions developed by deep thinkers of the past who pioneered the field. IT security is a far more competitive field than most, however, because the primary concern of the IT security professional is someone trying to circumvent all his efforts. As a result of this state of affairs, the ability to reason from principles is all-important. Mere robotic imitation of "best practices" is not sufficient for any certainty of success. This is why many of the responsibilities of the IT security professional cannot simply be automated away. Automation decreases the workload, but it cannot effectively eliminate the workload entirely, even though the entire IT field is about automation. This is why my articles here in the TechRepublic IT Security weblog often focus on principles rather than recipes. Security recipes can be useful, too, of course -- and I have nothing against providing them, even given their necessarily temporary usefulness -- but the most important security writing I can do is to address basic principles. This applies to both what principles I know and how one can and should go about discovering more principles on one's own, even as far as discovering any flaws in the principles I offer. In my consulting work, and when writing documentation, I try to teach the clients and end users of my work the principles behind what has been done. Simply encouraging rote memorization of steps one should take in the short term is tantamount to encouraging someone's information technology systems to fail in the long term. 
The same is true of providing systems that attempt to automate away any user interaction without teaching the user about what is going on behind the scenes and why. When you not only fail to teach the principles to the end user, but actively hide the details of how things work, you are very directly setting the end user up for failure -- whether you intend that result or not. Some unscrupulous people regard such inevitable failure as job security. Some ignorant people regard it as an inaccurate estimate of the state of information technology, believing that somewhere out there someone can actually produce a system that does not require a knowledgeable user to ensure it will not fail spectacularly. While the user does not need to know everything about the system to ensure it continues to work, he or she does need to know enough to be able to check on how well it is working, and also needs to be willing and able to learn more about it as needed when problems arise. Passivity, especially in the realm of IT security, is usually a recipe for failure. An aphorism that is related to the one about teaching a man to fish, and similarly applicable to far more than just IT security, is one I made up years ago and have used when relevant ever since: The mark of a true professional is one who works toward the day he or she is obsolete. If you are an IT security consultant, and you are not helping your clients learn how to get along without your services, you are not really doing your job. Keep that in mind when you consider the ethics of your decisions as an IT professional.
By Chad Perrin | November 23, 2007, 11:41 AM PST
In early October of this year, Indiana University graduate student Christopher Soghoian gave a presentation in Washington, DC about the potential risks of online political contributions. While I wasn't able to attend the presentation (I was about 1,500 miles away from DC at the time), the subject is an interesting one at first glance. Soghoian's claim is that online political contribution channels provide a brand new means of defrauding Americans. In the words of the above-linked Wired article:
The presidential campaigns' tactic of relying on impulsive giving spurred by controversial news events and hyped-up deadlines, combined with a number of other factors such as inconsistent Web addresses and a muddle of payment mechanisms creates a conducive environment for fraud, says Soghoian.
One wonders what behavior in particular Soghoian observed that prompted him to address the matter. In some respects, it seems he might be reacting to the pledge drives for Republican candidate Ron Paul, a dark horse candidate who went from "no chance in the world" to "largest single day of campaign contributions before the primaries in history", in part because of such a pledge drive. The Remember the 5th of November donation drive netted him more than four million dollars of donations in a single day. There are of course other candidates attempting to achieve similar results. They tend to differ from the Ron Paul effort in a number of ways, however:
- They aren't generally grass-roots efforts. Most pledge drives for other candidates are organized with the official sanction and aid of the respective campaigns themselves, whereas the Ron Paul funding drive was organized as a grass-roots effort. There's another such effort gearing up for December 16th, too, apparently affiliated with the same people who got the November 5th effort going.
- They have not, at least so far, been as successful in terms of the money gathered or the sheer number of contributors.
- Nobody seems to care.
Fraudsters could easily send out e-mails and establish Web sites that mimic the official campaigns' sites and similarly send out such e-mails that would encourage people to "donate" money without checking for the authenticity of the site.
In other words, Soghoian's concern is that irresponsible behavior on the part of candidates' campaigns may teach people to be irresponsible with their own financial security. Soghoian claims that impulsive behavior -- akin to the "impulse buy" items in the supermarket checkout lane -- is being encouraged to get people to open up their virtual wallets and give to Presidential campaigns. The positive result, at least from a Presidential candidate's perspective, is that more money flows into the campaign war chest of the candidate. The negative result, at least according to Soghoian, is that people are being subtly trained to be less careful with their decision-making about credit card use online. Soghoian would have us believe that this somehow constitutes a new threat to the financial security of US citizens. The truth of the matter is that this is not a new threat at all, as Bruce Schneier pointed out. In Schneier's words:
Fake charities and political organizations have long been problems. When you get a solicitation in the mail for "Concerned Citizens for a More Perfect Country" -- insert whatever personal definition you have for "more perfect" and "country" -- you don't know if the money is going to your cause or into someone's pocket. When you give money on the street to someone soliciting contributions for this cause or that one, you have no idea what will happen to the money at the end of the day.
The problems here, as "SteveJ" (one of Schneier's commenters) pointed out, are twofold:
Of course there are two different issues here: trust and identity. Creating a false charity/campaign, which doesn't really do what it claims with donated money (trust), isn't quite the same thing as posing as a particular charity/campaign and pocketing the donations (identity).
This is a matter of authentication, put quite simply. Somehow, the place where you're donating the money must be authenticated to the satisfaction of the individual being exhorted to donate. We, as active participants in US politics, must serve as individual authentication systems to ensure that any donations we give are going to "the right person" -- both in terms of believing what the candidates say and in terms of making sure that the site we use for a donation will actually put the money into our favorite candidate's campaign. Soghoian's solution is to centralize and certify all campaign contribution management with specific corporate organizations serving as clearinghouses. The specific examples cited by Soghoian and Markus Jakobsson, co-author of a whitepaper on the subject, are Paypal and Google. Ultimately, if you're paying enough attention to consider Google or Paypal more trustworthy than some candidate's campaign Website, you're paying enough attention to make some determinations of your own about who or what is trustworthy enough to send money. The difference is, at most, negligible. This also does not particularly protect you against the specific concerns Soghoian brought up, for a number of reasons:
- If you are directed to a donation page from a Website designed to look like the campaign's official site, that site can be spoofed no less easily if it's at http://checkout.google.com than if it's at http://ronpaul2008.com -- the same techniques for phishers apply.
- The high-pressure marketing tactics of many pledge drives will not be changed by where the actual donation link leads, and neither will a change in link destination change the potential negative effects those tactics might have on the security awareness of their target demographics.
- Centralizing the management of all Presidential campaign donations creates a single target on which phishers and other malicious security crackers can focus, and success may bring them far greater rewards. Why settle for redirecting the donation activities for a single candidate when you can target them all simultaneously?
- A conspiracy theorist might accuse Soghoian of trying to sideline any less well-known candidates who are less likely to be able to get into Google or Paypal donation clearinghouses. Even though that is probably not his intent, it is a more likely outcome of such a centralization of management than generally improved donation security.
None of this means online contributions are a bad idea, of course. Here is my advice for donating online as safely as possible:
- Never donate to any Website other than the official campaign site. While there may be safe alternate channels for online donations, your best bet is always the official campaign Website.
- Never click on a link from a third-party pledge drive Website. Take a direct route to the campaign website, perhaps by checking the URL to which the link directs you and typing it into the address bar of your Web browser yourself. This will help you avoid spoofed sites.
- Do not copy the URL from a third-party site and paste it into the browser's address bar: actually type it. Use of Unicode characters to spoof the URLs of legitimate websites is something you want to be able to circumvent.
- Do some research via a search engine to ensure that the apparent official campaign site URL you are using is in fact the genuine article, and not a fly-by-night fake. For instance, http://johnedwards.com is the URL for the official John Edwards Presidential campaign Website, but http://johnedwardsforpresident.com and http://johnedwards2008.com are (as of this writing) still available. Either one might conceivably be used as a temporary landing area for misdirected would-be campaign contributors.
- Only contribute via campaign Websites that provide encrypted access for the transaction, and assess the security of the donation process yourself as far as possible before committing to a donation. If your candidate of choice does not provide adequate security, contact a campaign representative and inform them of your concerns. Then, either contribute offline or wait until the problems with online contribution security are fixed. You don't want your bank account to be cleaned out by malicious security crackers just to give your favorite candidate $100, I'm sure. Losing $10,000 of your hard-earned money is probably an unacceptable outcome for you, especially when you cannot be guaranteed that even that $100 donation will get through to the candidate's campaign if security is not sufficient to protect the contribution transaction.
- Vote your conscience, not your fears.
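The Unicode URL-spoofing trick mentioned in the tips above can be made concrete with a short sketch. The helper name here is hypothetical, and this single check is no substitute for the habits described above, but it shows why typing a URL yourself beats pasting one you copied from a third-party site:

```python
# Hypothetical helper; a real anti-phishing defense needs far more than this.
def looks_like_homograph(hostname: str) -> bool:
    """Flag hostnames containing non-ASCII characters, which can be
    visually confusable with ordinary Latin letters."""
    return any(ord(ch) > 127 for ch in hostname)

# Cyrillic 'a' (U+0430) is visually identical to Latin 'a' (U+0061).
spoofed = "p\u0430ypal.com"
genuine = "paypal.com"

print(looks_like_homograph(spoofed))   # True
print(looks_like_homograph(genuine))   # False

# Punycode encoding exposes the trick: the spoofed name isn't 'paypal' at all.
print(spoofed.encode("idna"))
```

The two hostnames above render identically in many fonts, which is exactly what makes the attack effective against anyone who trusts their eyes alone.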
By Paul Mah | November 20, 2007, 7:30 PM PST
For all the expensive security software and peripherals money can buy, enterprises inevitably still miss some security holes. It might surprise you, but one security hole often overlooked by security managers is the humble universal serial bus (USB) port.
By Chad Perrin | November 19, 2007, 11:18 AM PST
Wireless networking can be kind of scary from a security standpoint. It opens up whole new attack vectors that were not present with wired network infrastructures. That doesn't mean you can't do it securely, however, and I aim to give you some ideas that can help you in that regard. Many of these tips are likely to be inapplicable to a lot of people. For instance, if you're running a wireless network that must allow connections from a changing lineup of computers, the point about restricting access by MAC address is unlikely to do much good. As always, you must exercise some common sense when reading through a list of security tips like this. You have to determine which options apply to you, and whether the fact that your plans make a given suggestion unusable means your plans are wrong or the suggestion simply is not relevant in your case.
- Use a strong password. As I pointed out in the article A little more about passwords, a sufficiently strong password (on a system with decent password protection) makes the likelihood of cracking the password through brute force attacks effectively impossible. Using a sufficiently weak password, on the other hand, almost guarantees that your system will be compromised at some point.
- Don't broadcast your SSID. Serious security crackers who know what they are doing will not be deterred by a hidden SSID -- the "name" you give your wireless network. Configuring your wireless router so it doesn't broadcast your SSID does not provide "real" security, but it does help play the "low hanging fruit" game pretty well. A lot of lower-tier security crackers and mobile malicious code like botnet worms will scan for easily discovered information about networks and computers, and attack those that have characteristics that make them appear easy to compromise. One of those is a broadcast SSID, and you can cut down on the amount of traffic your network gets from people trying to exploit vulnerabilities on random networks by hiding your SSID. Most commercial grade router/firewall devices provide a setting for this.
- Use good wireless encryption. WEP is not exactly "good" encryption. With a freely available tool like aircrack, you can sniff wireless traffic protected by WEP and crack security on that network in a matter of minutes. WPA is the current, common encryption standard you should probably be using -- or WPA2, where your hardware supports it -- and, of course, you should move to something stronger as soon as it becomes available to you. Technology is advancing every day, on both sides of the encryption arms race, after all.
- Use another layer of encryption when possible. Don't just rely on wireless encryption to provide all your security on wireless networks. Other forms of encryption can improve the security of the systems on the network, even if someone happens to gain access to the network itself. For instance, OpenSSH is an excellent choice for providing secure communications between computers on the same network, as well as across the Internet. Using encryption to protect your wireless network does not protect any communications that leave the network, so encryption schemes like SSL for dealing with e-commerce Websites are still of critical importance. The fact you're using one type of encryption in no way suggests you should not be using other types of encryption as well.
- Restrict access by MAC address. Many will tell you that MAC address restriction doesn't provide real protection but, like hiding your wireless network's SSID, restricting the MAC addresses allowed to connect to the network helps ensure you are not one of the "low hanging fruits" that people prefer to attack. It is best to be effectively invulnerable to the expert security cracker, but there's nothing wrong with being less palatable to the amateur as well.
- Shut down the network when it's not being used. This bit of advice is even more dependent on specific circumstances than most of them. If you have the sort of network that does not need to be running twenty-four hours a day, seven days a week, you can reduce the availability of it to security crackers by turning it off when it isn't in use. While many of us run networks that never sleep, and cannot really put this suggestion into practice, it is worth mentioning if only because one of the greatest improvements to the security of a system you will ever encounter is to simply turn it off. Nobody can access what isn't there.
- Shut down your wireless network interface, too. If you have a mobile device such as a laptop that you carry around with you and use in public, you should have the wireless network interface turned off by default. Only turn it on when you actually need to connect to a wireless network. The rest of the time, an active wireless network interface is nothing more than another attack vector for malicious security crackers to use as a target.
- Monitor your network for intruders. You should always make sure you have an eye on what's going on, that you are tracking attack trends. The more you know about what malicious security crackers are trying to do to your network, the better the job of defending against them you can do. Collect logs on scans and access attempts, use any of the hundreds of statistics generating tools that exist to turn those logs into more useful information, and set up your logging server to email you when something really anomalous happens. As a certain cartoon military SpecOps team from the 1980s would tell you, knowing about the danger is half the battle.
- Cover the bases. Make sure you have some kind of good firewall running, whether on a wireless router or on a laptop you use to connect to wireless networks away from home. Make sure you turn off unneeded services, especially on MS Windows where the unneeded services that are active by default might surprise you. In fact, do everything you can to secure your system regardless of OS platform, mobility of the system, or type of network.
- Don't waste your time on ineffective security measures. Every now and then, I run across some technically deficient end user handing out free advice about security based on things overheard and half-understood. Generally, this advice is merely useless, though often enough it can be downright harmful. The single most common bit of bad advice I hear from such people with regard to wireless networking is the admonition that when connecting to a public wireless network, such as in a coffee shop, you should only connect if the network uses wireless encryption. Sometimes these people get the advice half right, and recommend only connecting to networks protected by WPA -- it's half right only because WPA is the wireless encryption you should use, if you are going to use wireless encryption at all. There is no point in trying to "protect" yourself by connecting to a public access point only if it uses encryption, however, because the fact that the encryption key will be handed out to anyone who asks for it completely negates the supposed protection you expect. It's a bit like locking the front door of the house, but leaving a big sign on the door that says "The key is under the welcome mat," which only protects against illiterate burglars. If you want your network to be available to everyone that walks onto the premises, just leave it unencrypted, and if you need to connect to the Internet in some public location, don't worry about encryption. In fact, if anything, the wireless encryption might more properly serve as a deterrent rather than an enticement to using that particular wireless network, because it reduces convenience without effectively improving security at all.
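As a footnote to the first tip in the list above, it's easy to quantify why password length matters so much to brute force resistance. This back-of-the-envelope sketch assumes an alphabet of the 95 printable ASCII characters:

```python
import math

def search_space(alphabet_size: int, length: int) -> int:
    """Candidates a brute force attacker must try in the worst case."""
    return alphabet_size ** length

# Compare an 8-character password against a 12-character one.
for length in (8, 12):
    space = search_space(95, length)
    print(f"{length} chars: {space:.3e} candidates (~{math.log2(space):.0f} bits)")
```

Each additional character multiplies the attacker's work by 95, so four extra characters buy roughly 26 more bits of search space -- the difference between a password that falls to a determined attacker and one that is effectively impossible to brute force.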
By Chad Perrin | November 17, 2007, 10:24 PM PST
Music fans, recording artists, journalists, the RIAA, and digital rights activists have at least one thing in common right now. I'm speaking of the intense interests some people from each group have in the outcome of Radiohead's recent experiment in business models for musicians, of course. There are people on every side of the issue of how the Internet affects content publishing industries making all kinds of wild claims about what is going to happen as more and more ease of duplication and distribution comes to the end user. There are those who point to examples of book authors who have gained a following and a foothold in the market by offering their books online, self-publishing essentially for free, and ended up making a tidy profit and attracting book deals from major New York publishing houses. There are those such as the RIAA, MPAA, and Microsoft who claim that copyright violation -- or "piracy", as they are so fond of calling it -- is materially damaging their business and is morally equivalent to theft, even if a court of law does not consider it equivalent. There are also those, such as the Free Software Foundation and the OpenBSD project, who see the Internet as the single most effective tool for improving the state of the art of software ever discovered. Finally, there are those like Radiohead who see a tremendous opportunity for the actual content producers, the artists at the root of the entire music industry. By extension, what Radiohead is doing may have important implications for producers of every form of copyrightable material that can be distributed over the Internet. That includes software, both fiction and nonfiction prose, movies, music, and photography, among other things. What Radiohead is doing is bold and -- at least at their level of prominence -- unprecedented. 
The critically acclaimed band's newest album, In Rainbows, can be ordered from the band's Website as an impressive collector's edition including lots of extras -- for the equally impressive (i.e. not cheap) price of forty British pounds. That comes out to about US$80 at the current exchange rate, give or take a few dollars. It's being produced and sold without help or funding from any major RIAA record label, but that's not the controversial part of the deal. What has everyone up in arms is the other purchase you can make at the In Rainbows website: a digital download of the album, in a simple compressed ZIP archive, with no DRM. The most surprising thing about it is the price, which is whatever you want to pay. No, really. Radiohead charges whatever you want to pay. If you want to download it for free, that's fine. If you want to pay thirty British pounds for it, great. Radiohead seems to be banking on the idea that saving all the RIAA marketing, distribution, and other overhead expenses, combined with what RIAA spokespeople would surely call unrealistic optimism, will lead to greater personal profits for the band than it could ever hope to achieve via the traditional recording industry business model.
How's it working out?
According to a report presenting statistics gathered by comScore, 38% of people worldwide who downloaded In Rainbows paid something for it, which leaves about 62% who "freeloaded". The numbers vary a bit based on location, of course: in the United States, the reported numbers are 40% and 60%, respectively, showing a slightly higher likelihood for US residents to pay than downloaders in the rest of the world. Keep in mind that only Radiohead and its affiliates know for sure how many downloads there have been, how much money has been paid for them, and so on -- and Radiohead disputes the data, suggesting instead that most fans who ordered the download chose to pay at least some money for it. Some estimates range higher than US$9,000,000 of revenue generated by In Rainbows for the month of October alone, but the band itself isn't talking. For the sake of argument, I'll just assume that comScore is working with a statistically significant sample, and has arrived at roughly accurate results. Any following statistics, as with those in the previous paragraph, are based on comScore's numbers. Average payment per download, for all those "freeloaders" and paying customers, comes out to over two British pounds, with about a 52% higher average for US downloaders than those elsewhere in the world. Considering that it costs Radiohead effectively nothing per person who downloads it for free, every single dollar beyond the basic costs for producing the album and the infrastructure to offer it as downloads is pure profit. Of course, there are people, most of whom have a vested interest in maintaining the status quo in the record industry, who see this all as some inescapable portent of doom. As quoted in the comScore report, the CEO of TAXI (one of the world's most prominent independent A&R companies) said "Radiohead has been bankrolled by their former label for the last 15 years. 
They've built a fan base in the millions with their label, and now they're able to cash in on that fan base with none of the income or profit going to the label this time around. That's great for the band and for fans who paid less than they would under the old school model. But at some point in the not too distant future, the music industry will run out of artists who have had major label support in helping them build a huge fan base. The question is: how will new artists be able to use this model in the future if they haven't built a fan base in the millions in the years leading up to the release of their album under the pay what you'd like model?" Of course, the obvious answer to this is that artists will be able to build their fan base by doing exactly what Radiohead is doing -- and the more people value their music as it becomes more popular, the more money it will make for the band. It would at least in theory be an inexorable, organic growth of revenue for any band that is good enough or appealing enough to warrant increasing popularity and income. It's like a guaranteed raise every year, assuming you're actually worth the money you get when you receive your raise, but without the uncertainties of office politics getting in the way. In theory, theory and practice are the same thing. In practice, things are rarely that simple. Only time will tell whose interpretation of events will hold true in the long run, whose hopes or fears will be most relevant to the future of the record industry. One thing is certain, however: the better Radiohead's business model experiment goes, the worse the implications for any corporations and industry associations whose business model prompts them to use measures like DRM software to centralize control over content distribution.
What does this have to do with security?
The entire rest of the article up to this point was, in effect, laying the groundwork for a single, simple point: security is, among other things, a matter of picking your battles well. Some things just cannot be protected in the long run, and ultimately, if your business model depends on protecting such things, either your business model will change or your business will fail. It's really that simple. Radiohead is demonstrating a desire and ability to take chances on new business models when the band sees what appears to be the writing on the wall with regard to the demise of the record industry's traditional business model. Ironically, this fantastic new business model isn't new at all. It's more like a return to what may be the oldest musician's business model known to man, where the musician plays music and listeners who like what they hear drop money in his hat. Such a return to old form would make the RIAA's model a recent aberration, one based on the duplication aspects of the technology temporarily leaping ahead of the distribution aspects. Now that duplication, distribution, and even playback have become almost indistinguishable applications of technology, we discover that centralized control over the distribution of copyrightable works may fall into the category of things we just can't protect in the long run. Microsoft is not the only content and software vendor in the world whose entire business model inherently depends on protecting centralized control of distribution. I could as easily have used Sony as my example, considering the faux pas Sony/BMG has made with DRM lately. I need to pick an example, however, to make a point, and I've chosen Microsoft. 
Critically acclaimed, internationally successful band Radiohead has apparently learned the lesson that selling the product of the intellect as though it were a physical commodity that cannot be reproduced outside the record industry is an unsustainable practice, a business model that cannot be protected for long, and has begun pursuing other means of making a living from the same process of creation. Meanwhile, internationally successful software vendor Microsoft has reacted to similar circumstances and lessons in the software industry by trying desperately to tighten control of its empire, bundling ever more DRM software with its offerings, both to protect its own restricted distribution business model and to protect software and content provided by its business partners. Maybe Microsoft has a long term plan that involves ultimately changing its business model to leverage the market forces that exist regardless of centralized control of distribution, and its current protectionist tactics are only a holding action until the corporation can make the transition. Maybe almost every technologist with a meaningful understanding of the nature of bits, of the basics of information technology, is simply wrong about the ultimate impossibility of maintaining centralized control of distribution for any product of the intellect once it is recorded. From what I can see, though, it looks more like Radiohead knows more than Microsoft about a fundamental principle of security -- that a necessity of successful security practice is recognizing the difference between what can be effectively protected and what can't. It's a principle that applies just as well to the security of your business model as to the integrity of your network.
What did I pay?
I've never been Radiohead's biggest fan, but in my opinion their music is far better than most of what I hear on the radio. I figured it was fair to pay about three British pounds. It was worth every penny.
By Chad Perrin | November 15, 2007, 2:11 PM PST
In the words of Wikipedia's article on pseudorandomness at the time of this writing, "a pseudorandom process is a process that appears to be random but is not." In programming, the term "pseudorandom" is most often used to refer to number generators that produce numbers with the statistical appearance of randomness (that is, the chance of any given number being generated is roughly the same as if the generation were truly random), even though an algorithm is used to generate them, so that the numbers can be duplicated by someone else using the same algorithm (and the same parameters). Since numbers of varying degrees of randomness are so important to certain types of security procedures, most notably those involving encryption, the ability of some unauthorized individual to duplicate your number generator's results could constitute a severe security vulnerability. If your network's security relies on encryption -- and many do, especially wireless networks -- a poor pseudorandom number generator behind the encryption scheme is something that might keep you up at night, and with good reason. This doesn't mean you should only use systems that employ "true" random number generators. The only reason encryption systems tend to use pseudorandom number generators is that, at the current level of technology, there are no "true" software-based random number generators. You just need a generator with a sufficiently long period (the count of numbers generated before the pattern repeats itself) and the introduction of some factor into the generation process that actually is random. In most cases, this random input is used to generate a "seed". That seed is used as the start point for generating pseudorandom numbers, and it becomes significantly difficult -- effectively impossible, in fact -- to duplicate the operation of the pseudorandom number generator without having the same seed. 
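To make the seed's role concrete, here is a minimal sketch using a linear congruential generator (LCG) -- a classic toy PRNG, emphatically not a cryptographic one. The constants are the common Numerical Recipes values; everything else is illustrative.

```python
class LCG:
    """Toy linear congruential generator: state = (a*state + c) mod m."""

    def __init__(self, seed):
        # The seed is the entire starting state of the generator.
        self.state = seed % 2**32

    def next(self):
        # Numerical Recipes constants; the period here is at most 2**32.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state


a = LCG(42)
b = LCG(42)

# Identical seeds reproduce the identical "random" sequence --
# exactly the property an attacker who learns your seed exploits.
assert [a.next() for _ in range(5)] == [b.next() for _ in range(5)]

# A different seed diverges from the very first number.
assert LCG(42).next() != LCG(43).next()
```

This is why real systems draw their seed from a genuinely unpredictable source (hardware noise, OS entropy pools) rather than something guessable like the clock.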
The importance of your number generator's period, then, is to keep the generated numbers from repeating quickly enough for a brute force attack to succeed. Encryption systems have been developed over the years that actually rely on sharing the seed between parties or nodes that need to be able to encrypt and decrypt each other's missives. Other encryption systems have been developed with so short a period that they were worse than useless -- "useless" would be something that just didn't work, but "worse than useless" is something that makes you feel secure, so that within its "protection" you say things you wouldn't say in public while someone else listens in without your knowledge. An example of such a system was the World War II Japanese cipher codenamed Purple by the United States cryptanalysts who repeatedly cracked the encryption keys used by that system. There are other matters to keep in mind when developing a pseudorandom number generator (PRNG) algorithm, too. For instance, the difficulty of determining previously generated numbers when all you have cracked is the current state of the PRNG should be quite high, at least where the number generator is used as part of an encryption system. In systems where the number generator algorithm does not make this sufficiently difficult, a malicious security cracker can collect encrypted messages, such as your credit card numbers sent over SSL connections to e-commerce Websites, and work on determining the current state of the PRNG. Once the security cracker has that information, he might then attempt to trace previously generated numbers from that point, which would lead to the ability to recreate the encryption keys used on those stored encrypted messages and decipher them at leisure. Some encryption systems protect you against that eventuality: cracking the current key does not substantively help you crack past or future keys. 
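The "trace previously generated numbers" attack is easy to demonstrate against the toy LCG above: because the multiplier is odd, the state transition is trivially invertible modulo 2**32, so the current state reveals every past state. This is purely an illustrative sketch, not an attack on any real system; a cryptographically secure PRNG is designed specifically so that this kind of inversion is infeasible.

```python
M = 2**32
A, C = 1664525, 1013904223  # same toy LCG constants as before


def forward(state):
    """One step of the generator."""
    return (A * state + C) % M


# A is odd, hence coprime to 2**32, so it has a modular inverse:
# the step can be undone exactly (requires Python 3.8+ for pow(A, -1, M)).
A_INV = pow(A, -1, M)


def backward(state):
    """Undo one step: recover the previous state from the current one."""
    return ((state - C) * A_INV) % M


seed = 123456789
s1 = forward(seed)
s2 = forward(s1)

# Knowing only the current state s2, an attacker walks the generator
# backward and recovers every earlier output -- and thus any keys
# derived from them.
assert backward(s2) == s1
assert backward(backward(s2)) == seed
```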
This protection is called by some cryptographers "perfect forward secrecy". One example of an encryption system that provides perfect forward secrecy is the OTR Messaging library, used in encryption plugins for applications like the Pidgin multiprotocol IM client (the core functionality of which is coincidentally provided by a library called "libpurple"). Some encryption systems do not provide perfect forward secrecy. In fact, some systems that people have been trusting with their private communications and data for years provide quite deeply flawed forward secrecy. One example of an encryption system that utterly fails in the perfect forward secrecy arena is the Microsoft Windows PRNG, which was reverse engineered by Israeli researchers who demonstrated how easy it is to determine past and future encryption keys produced by the MS Windows integrated encryption systems if you know the current state of the PRNG. Encryption is one of those technology areas where anyone who knows anything about the field these days knows that there is no such thing as real security through obscurity. Trying to protect your security by hiding your procedures is a losing proposition, because the technology is easily reverse engineered to determine what those procedures are. Thus, your best bet for developing and maintaining secure encryption systems is security through visibility: develop a system that does not depend on the secrecy of its inner workings to work properly. All it takes is a single leak, or a single case of someone successfully reverse engineering the procedures embodied in your algorithms, to destroy your entire illusion of secrecy. It is in part for such reasons that open source encryption systems like OTR and OpenSSH have remained useful, strong protection for your privacy. When you open up your process to peer review, it can be fixed and improved based on the feedback you receive. 
When you do not, the only people who will be reviewing it are the people who want to circumvent the protections you have put in place -- and it is not in their best interest to give you feedback at all. Unfortunately for organizations whose software development and business model are based on keeping their procedures and processes secret, they have to trust their user base with those procedures and processes to get the kind of feedback needed to improve them. In matters of security, as in other matters, trust is a two-way street. This is not a new principle of encryption security. Auguste Kerckhoffs elaborated on this principle in the 19th century, with what came to be known as Kerckhoffs' Principle: a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. It was later reformulated by Claude Shannon, the father of information theory, to produce what is generally known as Shannon's Maxim, a rather more alarming statement with (almost) exactly the same implications: the enemy knows the system. With that assumption in mind, you can dispense with the bread and circuses of "security through obscurity" and focus on the matter of trust. Would you rather trust your friends and customers, who are inclined to help you keep your encryption system secure, or would you rather keep them in the dark without any reasonable expectation that the "enemy" will be likewise hampered? I know which group I would rather have reviewing my security technologies. Which would you prefer?
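Kerckhoffs' Principle can be illustrated with a toy stream cipher whose algorithm is entirely public -- a SHA-256-based keystream XORed against the message -- leaving the key as the only secret. This is a hypothetical sketch for illustration only (no nonce handling; reusing a keystream across messages would be unsafe), not something to deploy in place of a reviewed system like OTR.

```python
import hashlib


def keystream(key: bytes, length: int) -> bytes:
    """Public, fully documented algorithm: SHA-256 in counter mode.
    Per Kerckhoffs' Principle, only the key needs to stay secret."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))


msg = b"the enemy knows the system"
ct = xor_cipher(b"correct key", msg)

# Knowing the algorithm is no help without the key: the right key
# recovers the message, a wrong one yields noise.
assert xor_cipher(b"correct key", ct) == msg
assert xor_cipher(b"wrong key", ct) != msg
```

The design point: even if an attacker reads this source code, security is undamaged so long as the key stays private -- which is exactly the property "security through obscurity" fails to offer.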
By Mike Mullins | November 15, 2007, 1:01 PM PST
Most serious attackers aren't going to advertise their intentions by performing a broad scan -- the smartest attackers will try to come in under your detection radar. Learn why attackers prefer slow scanning, learn about the tools they use, and find out how to defend against this low-and-slow approach.
By Paul Mah | November 14, 2007, 9:58 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers a new social-engineering trick based on YouTube, a vulnerability in the Net::HTTPS module of the Ruby scripting language, and news that the URI security vulnerability has finally been fixed by Microsoft in November's Patch Tuesday.
By Paul Mah | November 13, 2007, 7:50 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers a new firmware update for the iPhone and iPod Touch, a new version of Miranda IM that fixes certain security issues, and a privilege escalation vulnerability in WinPcap.
By Paul Mah | November 12, 2007, 10:01 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers the release of PHP 5.2.5, multiple vulnerabilities discovered in phpMyAdmin, and various security updates released by SUSE.
By Paul Mah | November 9, 2007, 11:59 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers vulnerabilities discovered in Sun Solaris, the availability of official documentation from Apple on Leopard's firewall, and multiple overflow vulnerabilities in an ActiveX control associated with AOL Radio.
By Paul Mah | November 8, 2007, 10:26 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers the availability of a hotfix and a patch for vulnerabilities in Plone CMS and Xpdf, respectively, and a remotely exploitable vulnerability in the SSReader ActiveX control.
By Paul Mah | November 7, 2007, 10:50 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers a privilege escalation vulnerability in Microsoft's DebugView, a buffer overflow flaw in Oracle 10g R2, and information on how the firewall in Mac OS X Leopard can break some programs.
By Chad Perrin | November 7, 2007, 10:02 AM PST
As I pointed out on 19 October, in point number four of the article 10 security tips for all general-purpose OSes, an important step in the process of securing your system is to shut down unnecessary services. For as long as Microsoft Windows has been a network-capable operating system, it has come with quite a few services turned on by default, and it is a good idea for the security conscious user of Microsoft's flagship product to shut down any of these that he or she isn't using. Each version of MS Windows provides different services, of course, so any list of services to disable for security purposes will be at least somewhat particular to a given version of Microsoft Windows. As such, a list like this one needs to be identified with a specific Microsoft Windows version, though it can still serve as a guide for the knowledgeable MS Windows user to check out the running services on other versions as well. If you are running Microsoft Windows XP on your desktop system, consider turning off the following services. You may be surprised by what is running without your knowledge.
- IIS -- Microsoft's Internet Information Services provide the capabilities of a Web server for your computer.
- NetMeeting Remote Desktop Sharing -- NetMeeting is primarily a VoIP and videoconferencing client for Microsoft Windows, but this service in particular is necessary for remote desktop sharing through NetMeeting.
- Remote Desktop Help Session Manager -- This service is used by the Remote Assistance feature that you can use to allow others remote access to the system to help you troubleshoot problems.
- Remote Registry -- The capabilities provided by the Remote Registry service are frightening to consider from a security perspective. They allow remote users (in theory, only under controlled circumstances) to edit the Windows Registry.
- Routing and Remote Access -- This service bundles a number of capabilities together, capabilities that most system administrators would probably agree should be provided separately. It is rare that any of them should be necessary for a typical desktop system such as Microsoft Windows XP, however, so they can all conveniently be turned off as a single service. Routing and Remote Access provides the ability to use the system as a router and NAT device, as a dialup access gateway, and a VPN server.
- Simple File Sharing -- When a computer is not a part of a Microsoft Windows Domain, it is assumed by the default settings that any and all filesystem shares are meant to be universally accessible. In the real world, however, we should only want to provide shares to very specific, authorized users. As such, Simple File Sharing, which only provides blanket access to shares without exceptions, is not what we want to use for sharing filesystem resources. It is active by default on both MS Windows XP Professional and MS Windows XP Home editions. Unfortunately, this cannot be disabled on MS Windows XP Home. On MS Windows XP Professional, however, you can disable it by opening My Computer -> Tools -> Folder Options, clicking the View tab, and unchecking the Use simple file sharing (Recommended) checkbox in the Advanced settings: pane.
- SSDP Discovery Service -- This service is used to discover UPnP devices on your network, and is required for the Universal Plug and Play Device Host service (see below) to operate.
- Telnet -- The Telnet service is a very old mechanism for providing remote access to a computer, most commonly known from its use in the bad ol' days of security for remote command shell access on Unix servers. These days, using Telnet to remotely manage a Unix system may be grounds for firing; an encrypted protocol such as SSH should be used instead.
- Universal Plug and Play Device Host -- Once you have your "Plug and Play" devices installed on your system, it is often the case that you will not need this service again.
- Windows Messenger Service -- Listed in the Services window under the name Messenger, the Windows Messenger Service provides "net send" and "Alerter" functionality. It is unrelated to the Windows Messenger instant messaging client, and is not necessary to use the Windows Messenger IM network.
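For administrators who prefer the command line to the Services console, most of the services above can also be stopped and disabled with the built-in `sc` utility from an elevated prompt. The short service names below are, to the best of my knowledge, the internal names MS Windows XP uses for these services -- verify them on your own system with `sc query` before relying on them. This sketch only prints the commands; it deliberately executes nothing. (IIS and Simple File Sharing are omitted, since those are not toggled as single services.)

```python
# Map of MS Windows XP internal service names (assumed; check with
# `sc query` on your own system) to the friendly names used above.
SERVICES = {
    "mnmsrvc": "NetMeeting Remote Desktop Sharing",
    "RDSessMgr": "Remote Desktop Help Session Manager",
    "RemoteRegistry": "Remote Registry",
    "RemoteAccess": "Routing and Remote Access",
    "SSDPSRV": "SSDP Discovery Service",
    "TlntSvr": "Telnet",
    "upnphost": "Universal Plug and Play Device Host",
    "Messenger": "Windows Messenger Service",
}


def disable_commands(short_name):
    """Return the sc commands to stop a service and prevent it from
    starting at boot. Note the space after `start=` -- sc requires it."""
    return [
        f'sc stop "{short_name}"',
        f'sc config "{short_name}" start= disabled',
    ]


for short, friendly in SERVICES.items():
    print(f"rem {friendly}")
    for cmd in disable_commands(short):
        print(cmd)
```

Printing the commands instead of running them lets you paste the output into a batch file, review it, and run it deliberately rather than trusting a script with administrative changes.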
By Paul Mah | November 6, 2007, 10:04 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers an escalation of privilege vulnerability found in the Macrovision driver on Windows, a new release of Apple's QuickTime that fixes seven critical vulnerabilities, and vulnerabilities discovered in Perl and its Regular Expressions library.
By Paul Mah | November 5, 2007, 9:52 PM PST
Here's a collection of recent security vulnerabilities and alerts, which covers a local escalation of privilege in Symantec Antivirus for Mac, vulnerabilities discovered in ACDSee, and a vulnerability found in the IPSwitch e-mail client, which comes bundled with the IPSwitch IMail Server for Windows.
By Paul Mah | November 1, 2007, 10:38 PM PDT
Here's a collection of recent security vulnerabilities and alerts, which covers a vulnerability discovered in Novell's BorderManager 3.8 Client Trust, a memory corruption vulnerability in CUPS, and a new Mac Trojan that masquerades as a video codec for watching pornography.
By Mike Mullins | November 1, 2007, 6:24 AM PDT
Internet Information Services (IIS) continues to be a favorite target for hackers. Make their job harder by moving IIS log files to a secure remote location.
By Paul Mah | October 31, 2007, 11:59 PM PDT
Here's a collection of recent security vulnerabilities and alerts, which covers the release of WordPress 2.3.1 -- a bug-fix and security release -- multiple vulnerabilities in AIX, and a code injection vulnerability discovered in McAfee E-Business Server.