Feedback and/or contributions to make this better are appreciated and welcome
Highlighted quotes of the week:
“Any reliance on a generic scanning tool as your primary security control is nothing more than a false sense of security and a disaster waiting to happen.” – Michael Coates
“Instead of asking why Gawker leaked all those passwords, why aren’t we asking why everyone still has to use such a broken tech?” – Dan Kaminsky
“Protip: IP Spoofed attacks are sort of dead, because you can’t spoof TCP protos like HTTP, and because NAT interferes” – Dan Kaminsky
“I wouldnt say bounties replace pen-testing services, but can help take another slice of vulns out for a nominal price.” – Jeremiah Grossman
“cost 4 the hacker should b measured in $$ and man-(hours|days|weeks) only way to know the true cost.” – Josh Abraham
“Wonderful. A (non-household) family member clicked on a phishing link. Now i have to figure out what malware it was and how to clean.” – Rich Mogull
“PCI DSS is about protecting cardholder data, not about protecting Service Availability. MasterCards DDoS take down has nothing to do with PCI” – Dave Whitelegg
“As the AppSec industry matures & scales, automating vuln management will be essential. HoneyApps pointing to the future.” – at the BayThreat conference
To view the full security news for this week, please click here (divided into categories, with a category index at the top). The categories this week include (please click to go directly to what you care about): Hacking Incidents / Cybercrime, Unpatched Vulnerabilities, Software Updates, Business Case for Security, Web Technologies, Network Security, Mobile Security, Cryptography / Encryption, Privacy, General, Tools, Funny
Highlighted news items of the week (No categories):
Not patched: New Remotely Exploitable Bug Found in Internet Explorer, Backdoor Vulnerability Discovered on HP MSA2000 Storage Systems, Allegations regarding OpenBSD IPSEC
Updated/Patched: December 2010 Microsoft Black Tuesday Summary, December 2010 Security Bulletin Release, Over 500 patches for SAP, Debian and Red Hat close Exim hole, Vulnerability in the PDF distiller of the BlackBerry Attachment Service for the BlackBerry Enterprise Server, Mozilla releases Firefox & Thunderbird security updates, Google issues security update for Chrome 8, New version of OpenSSL fixes two vulnerabilities, WordPress 3.0.3 security update released, Java 6 Update 23 is out, Apple updates QuickTime for Windows and Mac OS X 10.5, Stability and security update for Office 2008 for Mac, Overdue patches published for RealPlayer
From the depths of the twitterverse, here is an example of the Kübler-Ross model of infosec, a Hamster Wheel of Pain where you get to play the Hamster
Everyone sounded the alarms at the Gawker Media attack, which included a security breach of websites such as Gizmodo, Lifehacker, Kotaku, io9, and others. The numbers were impressive: 1.3 million user accounts exposed, 405 megabytes of source code lost, and, perhaps more important to some, the identities of those leaving anonymous comments potentially revealed. For Gawker, there is a loss of trust that will be difficult to regain. Users are already clamoring for the ability to delete their accounts. And, on the technical side, all of Gawker’s systems will need to be painstakingly audited or rebuilt entirely from scratch to prevent the same thing from happening again. Happy Holidays indeed.
So, what is to be learned from this perfect storm of bluster and bravado? Many lessons, most of them demonstrating what not to do.
1. First and foremost, DO NOT poke the bear. By taunting the hacker community, especially the vigilante types, Gawker made itself a target unnecessarily. Never claim to be “unhackable.” The hackers outnumber you by several orders of magnitude, and they have more free time. Respect their capabilities. Not to mention that the odds are always stacked against defenders: the attackers only have to find one little crack in the wall to bring the castle crumbling down.
Repelling a hacker attack can be costly, as PayPal, Visa and MasterCard undoubtedly found out last week as they tried – with mixed success – to keep their Web sites from being knocked offline by supporters of WikiLeaks.
How much money exactly? An unrelated attack several years earlier on Google may provide some insight.
In 2005 Google was battling the Santy worm, a bit of malicious software that caused infected computers across the globe to automatically enter search queries – so many, in fact, that Google was overwhelmed. Details of the episode are chronicled in internal F.B.I. memos obtained by The New York Times through a Freedom of Information Act request.
Secure use of passwords is critical; they are the keys to the kingdom. If an individual’s or organization’s password is compromised, an attacker can access everything it was protecting. In addition, the attacker can then impersonate the victim and gain access to other resources. As such, password best practices are something many organizations focus on. Here are what I consider some of the key learning objectives for awareness, along with some learning objectives that I feel are overblown.
– Complexity: One of the first things every organization focuses on is password complexity. I see organizations moving to 12-character passwords with one CAPITAL, one number, one symbol, changed every ninety days. In a previous blog post I argued this is overkill; we are doing more harm than good. While some complexity is important, I feel organizations will have a far greater impact and reduce more risk by following these practices.
– Sharing: Often employees feel comfortable sharing passwords with other employees or supervisors. This is a dangerous practice. First, you lose accountability: you cannot track who did what when people share accounts. In addition, once a password is shared it may spread further than expected, including to unethical employees.
– Dual Use: Many users will use the same password for all their accounts.
…
The Open Source Security Testing Methodology Manual 3.0, covering security testing, security analysis, operational security metrics, trust analysis, operational trust metrics, and the tactics required to define and build the best possible security over the Physical, Data Network, Wireless, Telecommunications, and Human channels.
Security vendors realize that application security is the next great security frontier and have begun creating new products with their old scan-and-report approach. Unfortunately we’ve conditioned ourselves to accept that this antiquated approach works for all security issues. It worked for networks, right? But the reality is that this approach can only catch the most common issues in web applications. (Read: Most Web Application Scanners Missed Nearly Half Of Vulnerabilities) All of the deeper (and more critical) issues are going to require custom testing to detect. This is the problem. People buy web scanning tools believing the tool is easy to use and that it will result in secure web applications. The reality is that the tool is easy to start up, difficult and time-consuming to interpret results from, and likely to find only some common security issues in the application.
Before reading the following, ask yourself if you’d recommend to the average user that they store their passwords in a local password manager.
Today there are four primary ways users lose control over their web-based passwords: phishing scams (email or SEO), malware (installed malware or drive-by downloads), website break-ins (SQLi, RFI, misconfiguration, etc.), and website brute-force attacks. To help users protect themselves, I’ve outlined the client-side technologies they can deploy (which is why MFA is left out) and possible changes in their online behavior.
Secure Network Administration highlights of the week (please remember this and more news related to this category can be found here: Network Security):
This is the ‘release candidate’ version of the paper; should no errors be found, it will be the final version.
This paper aims at answering the following questions:
* What SSL/TLS configuration is state of the art and considered secure (enough) for the next few years?
* What SSL/TLS ciphers do modern browsers support?
* What SSL/TLS settings do servers and common SSL providers support?
* Which cipher suites offer the most compatibility and security?
* Should we really disable SSLv2? What about legacy browsers?
* How long does RSA still stand a chance?
* What are the recommended hashes and ciphers for the years to come?
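While waiting for the final version, a quick way to spot-check some of these questions against your own setup is the openssl command line. A minimal sketch; the hostname is a placeholder, and the -ssl2 flag only exists in OpenSSL builds that still include SSLv2 support:

# Does the server still accept SSLv2? A completed handshake here is bad news.
openssl s_client -connect www.example.com:443 -ssl2

# Which cipher suites can your local OpenSSL offer, and with what parameters?
openssl ciphers -v 'HIGH:!aNULL:!MD5'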
My Linux servers are all protected by a local iptables firewall. This is an excellent firewall which implements all the core features we expect from a decent firewall system. Except… logging and reporting! By default, iptables sends its logs using the kernel logging facilities. Those can be intercepted by common Syslog daemons: events are collected and stored in a flat file. Note that some Syslog implementations, like rsyslog, have a built-in mechanism to store logs in a MySQL database. But messages are stored “as is”, without processing or normalization, which makes them difficult to use. Of course, solutions exist to parse Syslog flat files and generate firewall stats (have a look at fwlogwatch), but I’m looking for something more “visual”. Visibility is a key point!
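As a starting point, giving the firewall events a distinctive prefix makes them easy to route and later parse. A minimal sketch; the rule placement and the rsyslog snippet are illustrative and need adapting to your own policy:

# Log new inbound connections with a grep-able prefix, then drop them
iptables -A INPUT -m state --state NEW -j LOG --log-prefix "FW-DROP: " --log-level info
iptables -A INPUT -j DROP

# /etc/rsyslog.d/iptables.conf: route the prefixed kernel messages to their
# own file and discard them from the main log (classic rsyslog syntax)
:msg, contains, "FW-DROP: " /var/log/iptables.log
& ~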
Sensible additions to your virus scanner
Good behaviour recognition is an important component that is often missing from free anti-virus software. In good commercial products such as Norton or Kaspersky, behaviour analysis is a last and very efficient line of defence, as it monitors and evaluates program activities.
If there is an increase in suspicious activity, for instance because a program lodges itself permanently in the registry, records keyboard input and hooks itself into the browser’s encrypted communication, there is a likelihood that a trojan is at work. In such cases, the behaviour monitor will intervene and, ideally, even prevent system manipulations.
Using a free anti-virus program doesn’t mean you have to go without this added protection. PC Tools offers ThreatFire, a free, dedicated behaviour-recognition program designed to be installed alongside a conventional anti-virus program.
1) Like top, but for files
watch -d -n 2 'df; ls -FlAt;'
2) Download an entire website
wget --random-wait -r -p -e robots=off -U mozilla http://www.example.com
The -p parameter tells wget to download all files needed to display the page, including images.
-e robots=off tells wget not to obey the robots.txt file.
-U mozilla sets the User-Agent string wget reports, masquerading as a browser.
--random-wait makes wget wait a random number of seconds between requests, helping avoid blacklisting.
All versions of Microsoft Windows allow real-time modifications to the Security Accounts Manager (SAM) that enable an attacker to create a hidden administrative backdoor account for continued access once a system has been compromised. Once an attacker has compromised a Microsoft Windows computer system using any method, they can either leave behind a regular user or hijack a known user account (such as ASPNET). This user account will now have all of the rights of the built-in local administrator account from local or remote connections.
Akamai often finds itself scrambling to stop a DDoS attack against one or more of its clients
Google. Twitter. Government websites. Fortune 500 companies. All have been victims of crippling distributed denial-of-service (DDoS) attacks. The attacks have grown in reach and intensity thanks to botnets and a bounty of application flaws. And Akamai Technologies has seen it all firsthand.
Many people use Akamai services without even realizing it. The company runs a global platform with thousands of servers that customers rely on to do business online. The company currently handles tens of billions of daily Web interactions for such companies as Audi, Fujitsu and NBC, and organizations like the Department of Defense and Nasdaq. There’s rarely a moment, if ever, when an Akamai customer is not under the DDoS gun.
This was one of the newer topics that I covered at BlackHat Abu Dhabi. HTML5 has two APIs for making cross-domain calls – Cross Origin Requests and WebSockets. Using them, JavaScript can make connections to any IP and to any port (apart from blocked ports), making them ideal candidates for port scanning.
Both APIs have a ‘readyState’ property that indicates the status of the connection at a given time. The duration for which a specific readyState value lasts has been found to vary with the status of the target port to which the connection is being made. This means that by observing this difference in behavior we can determine whether the port being connected to is open, closed or filtered. For Cross Origin Requests the telling value is the duration of readyState 1, and for WebSockets it is readyState 0.
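A minimal sketch of the Cross Origin Request variant, assuming a browser with cross-origin XMLHttpRequest support; the target host and port are placeholders, and the timing thresholds would need calibration per browser and network:

<script>
// Infer port status from how long the connection lingers in readyState 1.
function probePort(host, port) {
  var xhr = new XMLHttpRequest();
  var started = new Date().getTime();
  xhr.open("GET", "http://" + host + ":" + port + "/", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState > 1) {
      // A very short stay suggests a closed (RST) port, a longer one an
      // open but non-HTTP service; hitting the abort cap suggests filtered.
      var elapsed = new Date().getTime() - started;
      console.log(host + ":" + port + " left readyState 1 after " + elapsed + " ms");
    }
  };
  xhr.send();
  setTimeout(function () { xhr.abort(); }, 5000); // cap for filtered ports
}
probePort("10.0.0.1", 8080); // hypothetical internal target
</script>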
Secure Development highlights of the week (please remember this and more news related to this category can be found here: Web Technologies):
To secure a website or a web application, one has to first understand the target application, how it works and the scope behind it. Ideally, the penetration tester should have some basic knowledge of programming and scripting languages, as well as of web security. A website security audit usually consists of two steps: the first is usually to launch an automated scan; afterwards, depending on the results and the website’s complexity, a manual penetration test follows. To properly complete both the automated and manual audits, a number of tools are available to simplify the process and make it efficient from a business point of view.
In this white paper we explain in detail how to do a complete website security audit, focusing on the right approach and tools. We describe the whole process of securing a website in an easy-to-read, step-by-step format: from what needs to be done prior to launching an automated website vulnerability scan up to the manual penetration testing phase.
We have a number of different ModSecurity Demonstration projects hosted on the ModSecurity site.
ModSecurity/PHPIDS Evasion Testing Demo
The ModSecurity Demo is a joint effort between the ModSecurity and PHPIDS project teams to allow users to test ModSecurity and PHPIDS. Any data submitted is first sent to a ModSecurity install for inspection by the CRS, and then proxied to the PHPIDS page for its normal inspection and processing. The response body is then inspected to confirm whether there are any evasion issues between the CRS and PHPIDS.
XSS Mitigation with Content Injection Demo
This demo shows how to use ModSecurity’s Content Injection capabilities to prepend defensive JavaScript to the top of the returned page, which will protect against unauthorized JS execution.
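A rough sketch of how such a rule can look in ModSecurity 2.x, assuming content injection is enabled in your build; the script URL is a placeholder, and the demo’s real rules are certainly more involved:

# Enable content injection, then prepend a defensive script to HTML responses
SecContentInjection On
SecRule RESPONSE_CONTENT_TYPE "^text/html" \
    "phase:4,t:none,nolog,pass,prepend:'<script src=/defensive.js></script>'"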
Last night I received an urgent message from a client: “My machine has been hacked, someone got into the admin area, I need all of the details from this IP.”
So, I grepped the logs, grabbed the appropriate entries and saw something odd.
1.2.3.4 - - [09/Dec/2010:22:15:41 -0500] 'GETS /admin/index.php HTTP/1.1' 200 3505 '-' 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12'
1.2.3.4 - - [09/Dec/2010:22:17:09 -0500] 'GETS /admin/usermanagement.php HTTP/1.1' 200 99320 '-' 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12'
1.2.3.4 - - [09/Dec/2010:22:18:05 -0500] 'GETS /admin/index.php HTTP/1.1' 200 3510 '-' 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12'
A modified snippet of the .htaccess file:
AuthUserFile .htpasswd
AuthName "Protected Area"
AuthType Basic
require valid-user
Of course, we know GETS isn’t valid, but why is Apache handing out 200 status codes and content lengths that appear valid? We know the area was password protected behind .htaccess, and with some quick keyboard work we’ve got a system that properly prompts for Basic Authentication with a properly formed HTTP/1.0 request. Removing the restriction from the .htaccess protects the site, but why are these other methods able to pass through? Replacing GETS with anything other than POST, PUT, DELETE, TRACK, TRACE, OPTIONS or HEAD results in Apache treating the request as if GET had been typed.
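One pragmatic mitigation, sketched here under the assumption that mod_rewrite is available, is to reject any method outside the expected set before the request ever reaches Apache’s GET handler:

# Refuse anything that is not a plain GET, POST or HEAD request
RewriteEngine On
RewriteCond %{REQUEST_METHOD} !^(GET|POST|HEAD)$
RewriteRule .* - [F]

With that in place, a ‘GETS’ request receives a 403 instead of being silently handled as a GET.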
Let’s assume, dear Web surfer, that I can get you to visit a Web page I control. Just like the page on my blog you’re reading right now. Once you do, by the nature of the way the Web works, near-complete control of your Web browser is transferred to me for as long as you are here. I can invisibly force your browser to initiate online bank wire transfers, post offensive message board comments, vote Julian Assange as Time’s Person of the Year, upload illegal material, hack other websites and essentially whatever else I can think up. Worse still, on the receiving end, all the logs will point back to you. Not me.
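The mechanism is as simple as a hidden, auto-submitting form. A minimal illustration, with the bank URL and field names invented for the example:

<!-- Hidden form on the attacker's page; the victim's browser sends the
     request to the third-party site with the victim's cookies attached. -->
<form id="wire" action="http://bank.example.com/transfer" method="POST" style="display:none">
  <input type="hidden" name="to" value="attacker-account">
  <input type="hidden" name="amount" value="1000">
</form>
<script>document.getElementById("wire").submit();</script>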
This week thousands of system administrators who make use of Google products will open their inboxes to see an email from Google explaining that its Web Optimizer product contains a cross-site scripting flaw that allows hackers to inject scripts into their Google Optimizer-enabled web pages.
…
I have documented my research in this article, and I hope that it will be of use to you. There is a lot to learn from other people’s mistakes, especially when those people are Google themselves.
The flaw exists in Google’s Web Optimizer, which is a series of scripts that web administrators use to gain insight into how their websites are navigated by online customers.
Below is a segment of the flawed code.
Finally, I leave you with the secure development featured article of the week courtesy of OWASP (Development Guide Series):
Secure Application Architecture Design (continued)
Identify application components and their associated data flows within the application’s environment
-Identify all the components of the application environment, the supporting infrastructure for its associated data flows, and the owners of each component. Usually there are separate teams involved: teams which manage infrastructure equipment, teams which manage the Windows or UNIX environments that applications or their components reside on, authentication teams, or DBA teams which manage databases, for example. While some of these devices are outside the realm of the application itself, it is necessary to understand the environment when troubleshooting application-related connectivity issues or errors, security events, and potential downtime. This also goes a long way toward understanding the application’s total security posture when another team manages certain aspects of the application’s environment. For example, if a separate database team manages the databases, and your application’s database resides on the same server as another database instance that runs as a privileged user (such as SA), you would need to understand this: if another application that calls upon this shared database server were vulnerable to SQL injection, it could potentially lead to your application’s database being taken over and any data within it disclosed. An example listing of components in an application’s environment:
- application firewall
- web server
- application server
- middleware servers
- databases
- web caches
- stateful firewalls
- routers
- switches
- Domain Servers
- OTP servers
- RSA SecurID servers
- PKI/TACACS/RADIUS servers
- HIDS/HIPS
- NIDS/NIPS
- Syslog and log aggregation servers
- Anti-Virus servers/appliances
- Load Balancers/reverse proxies
- etc
-Please note that this is not an all-encompassing list, but an example of components that a security architect and an application developer lead would need to be cognizant of. The application lead would need to know all the components involved in the application’s design, network, data flows, interconnections, etc., and the owners of each component. This information will become very useful throughout the design phase and the rest of the SDLC, and also during security incidents and troubleshooting.
-Additional consideration should be given to the placement and configuration of the network IDS/IPS, and to which data flows will be monitored (for example, an IPS placed inline outside the DMZ may not be able to monitor web server to database communication).
Source: link
Have a great week and weekend.