Feedback and/or contributions to make this better are appreciated and welcome
Highlighted quotes of the week:
“OWASP top 10 is in danger of becoming the pci of the app layer. it’s not enough” – Gal Shpantzer
“Remember deceivers tend to actually engage in greater eye contact not less. The myth of looking away to lie is, nonscientific tripe.” – Joe Navarro
(older quote) “At an international army event, I just put my USB stick in a presentation PC all the Generals were using. Got it back with an Autorun worm.” – Mikko Hypponen
To view the full security news for this week, please click here (divided into categories, with a category index at the top). The categories this week include (click to go directly to what you care about): Hacking Incidents / Cybercrime, Unpatched Vulnerabilities, Software Updates, Business Case for Security, Web Technologies, Network Security, Mobile Security, Privacy, Censorship, Tools, General, Outrageous, and Funny.
Highlighted news items of the week (No categories):
Not patched: McAfee Security Bulletin Released
Patched: Sun security updates, VMware Security Advisory, Winamp plug-in backdoor wide open to viral penetration
A few people have asked me what they should do about business continuity as a result of the recent heavy snowfalls. I have pointed many of them
to the excellent business continuity plan template that the Department of Enterprise, Trade and Employment published recently for the H1N1 flu virus, and which
is equally applicable to the current weather conditions.
So why not take a look at your own organisation and try to figure out what you would need to have in place should some of your key staff be unable to get to
their place of work? Some key questions to ponder:
* How many concurrent remote users can your VPN support?
* If a large number of staff were to try to work from home on the same day would the VPN be able to cope with the traffic?
* Should you have a VIP VPN that can only be used by those staff in such scenarios?
* Do your staff have work laptops or PCs to work from home? If not, how will you secure any data they may have on them while working from home?
* Can staff use alternative means to meet with clients, such as online conferences or conference call facilities?
– Attachments: Attackers often send infected attachments. We need to make end users aware of these attacks: attackers send emails that build
trust with the victim, then fool them into clicking on the attachment. The behavior we need to change is to get people to think before opening attachments.
Was the attachment expected? If not sure, contact the sender or forward the email to your security team.
– Links: These attacks work by fooling end users into clicking on a link. The link then sends the user to a phishing site or a drive-by attack site, or has
them download and open an infected file (such as a .pdf). The behavior we need to change is to get people to think before clicking on links. Was the link
expected? If not sure, contact the sender or forward the email to your security team.
– Scams: These attacks fool people out of their information or money by simply asking for it (the classic lottery attack). The behavior we need to instill
is skepticism: if something sounds too good to be true, it probably is.
– Spear Phishing: Many high-value organizations can be targeted or singled out by specific attackers. End users need to understand that attacks
can be customized specifically to them and their organization.
Many of our customers have committed the time and effort to become compliant with the ISO/IEC 27001:2005 standard. Following the 'resource intensive'
phase of the Information Security Management System (ISMS) implementation, it is of course crucial to review and attend to all aspects of this system at
regular intervals. As with other management standards, the Plan, Do, Check, Act cycle mandates a process of continual assessment and improvement to the
information security of the organisation.
Secure Network Administration highlights of the week (please remember this and more news related to this category can be found here: Network Security):
Two years ago I published a table of Vulnerability and threat mitigation features in Red Hat Enterprise Linux and Fedora. Now that we've released Red Hat
Enterprise Linux 6, it's time to update the table. Thanks to Eugene Teo for collating this information.
Between releases there are lots of changes made to improve security and we've not listed everything; just a high-level overview of the things we think are
most interesting that help mitigate security risk. We could go into much more detail, breaking out the number of daemons covered by the SELinux default
policy, the number of binaries compiled PIE, and so on.
Note that this table is for the most common architectures, x86 and x86_64 only; other supported architectures may vary.
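As a concrete illustration of one mitigation the table counts (and an assumption on my part, not an excerpt from the Red Hat post): a binary compiled as a Position Independent Executable (PIE) reports ELF type "DYN" rather than "EXEC" in its header, which you can verify with readelf. The test program below is made up for the demonstration.

```shell
# Spot-check the PIE mitigation: build a tiny program with -fPIE -pie and
# inspect its ELF header. A PIE binary shows Type: DYN; a non-PIE shows EXEC.
printf 'int main(void){return 0;}\n' > pie_test.c
cc -fPIE -pie -o pie_test pie_test.c
readelf -h pie_test | grep 'Type:'
```

The same check against system daemons is one way to sample how widely a distribution applies the hardening flags the table summarizes.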
Sed is a stream editor. A stream editor is used to perform basic text transformations on an input stream. While in some ways similar to an editor which
permits scripted edits (such as ed), sed works by making only one pass over the input(s), and is consequently more efficient. But it is sed's ability to
filter text in a pipeline which particularly distinguishes it from other types of editors.
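A minimal sketch of that pipeline filtering (the input text here is made up):

```shell
# sed makes a single pass over its input stream, applying the edit to each
# line as it flows through the pipeline.
printf 'error: disk full\nok: all good\n' | sed 's/^error:/ALERT:/'
```

This prints "ALERT: disk full" followed by "ok: all good": each line is transformed on the fly, with no temporary file and no second pass.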
Here are the top 25 sed commands, as voted by readers.
If you like monitoring, you might want to receive notifications at every (or only root) login, in addition to logs.
/etc/profile, bashrc, etc.
One can first think of a script in /etc/profile – I saw that solution on many websites – but it is wrong, because the user can connect with "ssh <host> /bin/sh" and it
will not run any login script. Also, this kind of login does not appear in last/wtmp but only in auth.log via sshd (because it's not considered an
interactive login). A second solution is to parse auth.log – for instance with SEC, the Simple Event Correlator – and use a notify script. It should work, yet I prefer the third.
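A rough sketch of the notify script a log watcher such as SEC could invoke when it matches an sshd "Accepted" line in auth.log. The argument names and the recipient are my assumptions, not from the post; the message is only printed here so the sketch stays self-contained.

```shell
# Hypothetical notify script: called as  notify.sh <user> <source-ip>
# by the log watcher. In production you would pipe $msg to mail instead
# of echoing it, e.g.:  echo "$msg" | mail -s "login alert" admin@example.com
user="${1:-root}"
src="${2:-unknown}"
msg="Login by ${user} on $(hostname) at $(date) from ${src}"
echo "$msg"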
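A rough sketch of the notify script a log watcher such as SEC could invoke when it matches an sshd "Accepted" line in auth.log. The argument names and the recipient are my assumptions, not from the post; the message is only printed here so the sketch stays self-contained.

```shell
# Hypothetical notify script: called as  notify.sh <user> <source-ip>
# by the log watcher. In production you would pipe $msg to mail instead
# of echoing it, e.g.:  echo "$msg" | mail -s "login alert" admin@example.com
user="${1:-root}"
src="${2:-unknown}"
msg="Login by ${user} on $(hostname) at $(date) from ${src}"
echo "$msg"
```

Because it is driven from auth.log, this catches non-interactive logins (like the "ssh host /bin/sh" case) that an /etc/profile hook misses.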
I received an email from a private mailing list recently, asking for some help in reviewing the contents of a packet capture file:
"I have a 2.5 GB pcap file which I want to verify that it contains only encrypted content. […] I'm wondering if anyone knows of a way that I can accomplish
this using Windump or some other Windows utility."
This kind of analysis happens frequently when performing a black-box pentest against a protocol. Over the years I've used a couple of techniques to evaluate
the content of packet captures to determine if the traffic is encrypted or just obfuscated.
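One crude technique along these lines (my assumption, not necessarily the author's method): encrypted payloads look uniformly random, so a decent-sized sample uses nearly all 256 possible byte values, while plaintext protocols use far fewer. The sample files below are generated on the spot so the sketch is self-contained; against a real capture you would feed it the extracted payload bytes.

```shell
# Count distinct byte values in a sample: ~256 suggests encrypted/random
# data, a low count suggests plaintext or light obfuscation.
head -c 65536 /dev/urandom > random_sample.bin
printf 'GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n' > plain_sample.bin
for f in random_sample.bin plain_sample.bin; do
    n=$(od -An -v -tu1 "$f" | tr -s ' ' '\n' | sed '/^$/d' | sort -nu | wc -l)
    echo "$f: $n distinct byte values"
done
```

A byte-frequency histogram or an entropy estimate refines the same idea; obfuscation schemes like single-byte XOR preserve the skewed distribution of the underlying plaintext, which is how they give themselves away.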
Secure Development highlights of the week (please remember this and more news related to this category can be found here: Web Technologies):
There are a million ways to do a code review, but there's one component that's common to all: a single developer critically examining code. So it makes sense
to ask what the data tells us about developers diving into code by themselves, without distraction and without discussion.
What are the best practices during quiet contemplation?
How much time should you spend on a code review in one sitting?
Figure 18-1 maps how efficient we are at finding defects over time [Dunsmore 2000]. The horizontal axis plots time passing; the vertical is the average
number of defects a person found at this time index.
Figure 18-1. After 60‒90 minutes, our ability to find defects drops off precipitously
Here are a couple of things I either know or assume are true:
1. HTTPS is better than HTTP for the safekeeping of cookies. Again, not that they can't be stolen, but it takes a more concerted effort, and network
sniffing (e.g. 'Firesheeping') is rendered useless.
2. The SECURE cookie attribute, along with SSL, prevents the cookie from crossing encryption boundaries. This is fine, but wouldn't a modern browser
typically warn a user that something outside the encrypted session was requesting information?
3. The HTTPOnly cookie directive prevents scripts from seeing cookie contents. This can help to mitigate some XSS exploits, but not all. Also, not every
browser supports this directive, so you have to deal with that.
4. Encrypting a cookie doesn't really solve anything except to make the contents unreadable on an unencrypted connection. Consequently, outside of SSL, unless
there's some unique mechanism to share/transmit a secret key with a specific user/browser, and a way for the browser to use that key for cookie
encryption, then stealing an encrypted cookie is as good as stealing a plain-text cookie in terms of session hijacking. Any client-side encryption and/or
hashing mechanisms are essentially useless, as anything on the client side can be manipulated.
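For reference, points 2 and 3 correspond to two attributes on the Set-Cookie response header. The cookie name and value below are made up for illustration:

```shell
# A Set-Cookie header combining both attributes discussed above:
# Secure   - the browser only sends the cookie over HTTPS
# HttpOnly - the cookie is hidden from script (document.cookie)
printf 'Set-Cookie: session=abc123; Secure; HttpOnly\r\n'
```

Neither attribute protects the cookie from a compromised endpoint; they only narrow the network-sniffing and script-theft avenues discussed in points 1 and 3.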
This post is long overdue. I will cover the current state of exception handling options within both ModSecurity and the OWASP Core Rule Set (CRS).
Exception Handling Methodologies
Before continuing with this blog post, I highly recommend that you review the blog post describing the Traditional vs. Anomaly Scoring Detection Modes. Your
detection operating mode will directly impact your exception handling options.
False Positives and WAFs
It is inevitable: you will run into some False Positives (FPs) when using web application firewalls. This is not something that is unique to ModSecurity. All
web application firewalls will generate some level of false positives, especially when you first deploy them or when your application changes. Continuous
application profiling, where the WAF learns expected behavior, helps to reduce FPs; however, negative security model (blacklist) rule sets will always generate
some level of FPs, as they have no idea what input is valid. The following information will help to guide you through the process of identifying, fixing,
implementing and testing new exceptions to address false positives within the OWASP ModSecurity CRS.
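As a taste of what such an exception can look like, here is one common mechanism: removing a CRS rule by ID for the one location that triggers the false positive. The path and rule ID below are illustrative, not taken from the post; SecRuleRemoveById and the Apache LocationMatch container are standard ModSecurity configuration.

```shell
# Write a hypothetical per-location exception into a config fragment that
# the Apache/ModSecurity config would Include after the CRS rules.
cat <<'EOF' > crs_exceptions.conf
<LocationMatch "/app/report">
    SecRuleRemoveById 960015
</LocationMatch>
EOF
cat crs_exceptions.conf
```

Scoping the removal to a location keeps the rule active everywhere else, which is far safer than disabling it globally.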
Category: Vulnerability Writeups / Tags: csrf, google, google calendar, google vulnerability reward program, security
Google Calendar was vulnerable to a series of CSRF vulnerabilities. In two separate instances, I found that existing countermeasures (CSRF tokens) were not
being validated by the application.
In the first instance, I found it was possible to add an arbitrary event to a user's calendar. I used Google Calendar's "quick add" feature: it allows users
to click on a space on the calendar and type in the name of an event, which adds it to the calendar. By monitoring the HTTP traffic between my browser and
Google, I determined that the calendar entry was being created by a GET request that looked something like this (I've broken up the URL for the sake of readability):
Finally, I leave you with the secure development featured article of the week courtesy of OWASP (Development Guide Series):
Applications without security architecture are like bridges constructed without finite element analysis and wind tunnel testing. Sure, they look like bridges, but they will fall down at the first flutter of a butterfly's wings. The need for application security, in the form of security architecture, is every bit as great as in building or bridge construction.
Application architects are responsible for constructing their design to adequately cover risks from both typical usage and extreme attack. Bridge designers need to cope with a certain number of cars and a certain amount of foot traffic, but also cyclonic winds, earthquakes, fire, traffic incidents, and flooding. Application designers must likewise cope with extreme events, such as brute force or injection attacks, and fraud. The risks for application designers are well known. The days of "we didn't know" are long gone. Security is now expected, not an expensive add-on or something simply left out.
Security architecture refers to the fundamental pillars: the application must provide controls to protect the confidentiality of information, integrity of data, and provide access to the data when it is required (availability) – and only to the right users. Security architecture is not “markitecture”, where a cornucopia of security products are tossed together and called a “solution”, but a carefully considered set of features, controls, safer processes, and default security posture.
When starting a new application or re-factoring an existing application, you should consider each functional feature, and consider:
Is the process surrounding this feature as safe as possible? In other words, is this a flawed process?
If I were evil, how would I abuse this feature?
Is the feature required to be on by default? If so, are there limits or options that could help reduce the risk from this feature?
Andrew van der Stock calls the above process “Thinking Evil™”, and recommends putting yourself in the shoes of the attacker and thinking through all the possible ways you can abuse each and every feature, by considering the three core pillars and using the STRIDE model in turn.
By following this guide, and using the STRIDE / DREAD threat risk modeling discussed here and in Howard and LeBlanc’s book, you will be well on your way to formally adopting a security architecture for your applications.
The best system architecture designs and detailed design documents contain a security discussion of each and every feature: how the risks are going to be mitigated, and what was actually done during coding.
Security architecture starts on the day the business requirements are modeled, and never finishes until the last copy of your application is decommissioned. Security is a life-long process, not a one-shot accident.
Have a great week and weekend.