The OWASP Top 10 Web Application Security Risks for 2010 are:
- A1: Injection
- A2: Cross-Site Scripting (XSS)
- A3: Broken Authentication and Session Management
- A4: Insecure Direct Object References
- A5: Cross-Site Request Forgery (CSRF)
- A6: Security Misconfiguration
- A7: Insecure Cryptographic Storage
- A8: Failure to Restrict URL Access
- A9: Insufficient Transport Layer Protection
- A10: Unvalidated Redirects and Forwards
Please help us make sure every developer in the ENTIRE WORLD knows about the OWASP Top 10 by helping to spread the word!!!
As you help us spread the word, please emphasize:
- OWASP is reaching out to developers, not just the application security community
- The Top 10 is about managing risk, not just avoiding vulnerabilities
- To manage these risks, organizations need an application risk management program, not just awareness training, app testing, and remediation
We need to encourage organizations to move beyond the "penetrate and patch" mentality. As Jeff Williams said in his 2009 OWASP AppSec DC Keynote: “we’ll never hack our way secure – it’s going to take a culture change” for organizations to properly address application security.
At Google, we've gathered hard data to reinforce our intuition that "speed matters" on the Internet. Google runs experiments on the search results page to understand and improve the search experience. Recently, we conducted some experiments to determine how users react when web search takes longer. We've always viewed speed as a competitive advantage, so this research is important to understand the trade-off between speed and other features we might introduce. We wanted to share this information with the public because we hope it will give others greater insight into how important speed can be.

Speed as perceived by the end user is driven by multiple factors, including how fast results are returned and how long it takes a browser to display the content. Our experiments injected server-side delay to model one of these factors: extending the processing time before and during the time that the results are transmitted to the browser. In other words, we purposefully slowed the delivery of search results to our users to see how they might respond.

All other things being equal, more usage, as measured by number of searches, reflects more satisfied users. Our experiments demonstrate that slowing down the search results page by 100 to 400 milliseconds has a measurable impact on the number of searches per user of -0.2% to -0.6% (averaged over four or six weeks depending on the experiment). That's 0.2% to 0.6% fewer searches for changes under half a second!

Furthermore, users do fewer and fewer searches the longer they are exposed to the experiment. Users exposed to a 200 ms delay since the beginning of the experiment did 0.22% fewer searches during the first three weeks, but 0.36% fewer searches during the second three weeks. Similarly, users exposed to a 400 ms delay since the beginning of the experiment did 0.44% fewer searches during the first three weeks, but 0.76% fewer searches during the second three weeks. Even if the page returns to the faster state, users who saw the longer delay take time to return to their previous usage level. Users exposed to the 400 ms delay for six weeks did 0.21% fewer searches on average during the five-week period after we stopped injecting the delay.

While these numbers may seem small, a daily impact of 0.5% is of real consequence at the scale of Google web search, or indeed at the scale of most Internet sites. Because the cost of slower performance increases over time and persists, we encourage site designers to think twice about adding a feature that hurts performance if the benefit of the feature is unproven.
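To make the delay-injection setup concrete, here is a minimal sketch of how a server might bucket users into stable experiment arms and add the arm's latency just before sending the results page. This is not Google's code: the function names, the multiplicative-hash bucketing, and the use of usleep are all assumptions made for illustration.

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Extra server-side latency per experiment arm, in microseconds.
     * Arm 0 is the control; the others mirror the 100-400 ms delays
     * described above. */
    static const useconds_t delay_us[] = { 0, 100000, 200000, 400000 };

    /* Multiplicative hash of an anonymized user id, so the same user
     * always lands in the same arm for the life of the experiment. */
    static unsigned arm_for_user(uint64_t user_id)
    {
        return (unsigned)((user_id * 2654435761u) >> 16) % 4;
    }

    /* Hypothetical hook called just before the results page is sent:
     * sleep for the arm's injected delay, then transmit as usual. */
    static void send_results(uint64_t user_id)
    {
        usleep(delay_us[arm_for_user(user_id)]);
        /* ... write the results page to the client socket ... */
    }

    int main(void)
    {
        for (uint64_t id = 1; id <= 5; id++) {
            printf("user %llu -> +%u ms\n", (unsigned long long)id,
                   (unsigned)(delay_us[arm_for_user(id)] / 1000));
            send_results(id);
        }
        return 0;
    }

Because the assignment is a pure function of the user id, no per-user state needs to be stored to keep each user's experience consistent across searches.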
I'm sorry for the long silence, but I've been very busy this month :(
Today I just want to suggest this reading on Kernel Exploitation.
We can demonstrate the first fact with the following program, which maps the NULL page and then writes to the null_read file to force a kernel NULL dereference, so that nothing goes wrong:
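(A minimal sketch of that program; the debugfs path /sys/kernel/debug/nullderef/null_read and the requirement that vm.mmap_min_addr allow mapping page zero are assumptions here.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Map the NULL page so the kernel's later read of address 0
         * finds valid memory. Assumes vm.mmap_min_addr permits it. */
        void *page = mmap(0, 4096, PROT_READ | PROT_WRITE,
                          MAP_ANONYMOUS | MAP_FIXED | MAP_PRIVATE, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Assumed debugfs location of the nullderef trigger file. */
        int fd = open("/sys/kernel/debug/nullderef/null_read", O_WRONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, "1", 1) < 0)  /* any write triggers the NULL read */
            perror("write");
        close(fd);

        printf("Triggered a kernel NULL pointer dereference!\n");
        return 0;
    }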
Writing to that file triggers a NULL pointer dereference in the nullderef kernel module, but because the kernel runs in the same address space as the user process, the read hits our freshly mapped page and proceeds fine: no kernel oops. We've passed the first step toward a working exploit.
Safari has finally decided to upgrade its WebKit, adding sandboxing and process separation. Here is the official high-level documentation.
Take a look; it seems very promising!
Today, digital government (DG) research is being conducted all over the world. Most of this work is focused within the geographic and political contexts of individual countries. However, given the growing influence of global economic, social, technical, and political forces, the questions embedded in digital government research are now expanding to international dimensions that focus on topics that cross the jurisdictions, cultures, or customs of different countries.
The iGov Research Institute, a week-long residential program, provides doctoral students from around the world an opportunity to assess the impact of information and communication technologies in the public sector and to understand the value of doing research in an international context. Developed by the Center for Technology in Government, under the sponsorship of the U.S. National Science Foundation, the Institute takes advantage of the experiences of a major city or region as an integral part of the program. It includes field visits and discussions with innovative government leaders, as well as academic sessions.
- C:\Documents and Settings\Administrator\Application Data\Microsoft\Crypto\RSA\S-1-5-21-1078081533-1677128483-1801674531-500\699c4b9cdebca7aaea5193cae8a50098_5fc4e98d-1101-4864-b0bf-e0b3f6d9d878