

Posts relating to the category tag "monitoring" are listed below.

09 May 2014

AppSensor Guide Part V : Model Dashboards

This post describes what is in Part V of the new OWASP AppSensor Guide v2.0, published on 2nd May.

Photograph of a technician lying on the ground in the middle of a street behind a barrier, working on a display sign

"Part V : Model Dashboards" comprises three shorter chapters:

  • Chapter 27 : Security Event Management Tools
  • Chapter 28 : Application-Specific Dashboards
  • Chapter 29 : Application Vulnerability Tracking.

Part V introduces the necessary concepts for visualising AppSensor data, and presents example application-specific dashboards that have already been created.

Data visualisation of real-time attack detection and response provides organisations with much needed insight into whether their applications are under attack, and by whom.
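To give a flavour of the kind of data such a dashboard might chart, here is a small illustrative sketch (the event fields, detection point identifiers and values are my own invention, not taken from the guide) that aggregates AppSensor-style security events into per-detection-point and per-user counts:

```python
from collections import Counter
from datetime import datetime

# Hypothetical security events: (timestamp, detection point ID, user).
# The detection point IDs and users are invented for illustration.
events = [
    (datetime(2014, 5, 9, 10, 0), "IE1", "alice"),
    (datetime(2014, 5, 9, 10, 1), "ACE3", "mallory"),
    (datetime(2014, 5, 9, 10, 2), "ACE3", "mallory"),
    (datetime(2014, 5, 9, 10, 5), "AE2", "bob"),
]

# Counts per detection point - the sort of series a dashboard would chart
by_detection_point = Counter(dp for _, dp, _ in events)

# Counts per user, to highlight the most suspicious sources
by_user = Counter(user for _, _, user in events)

print(by_detection_point.most_common())  # [('ACE3', 2), ('IE1', 1), ('AE2', 1)]
print(by_user.most_common(1))            # [('mallory', 2)]
```

A real dashboard would of course draw on a persistent event store and add time-series views, but the underlying aggregation is much the same.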

Previous posts and a subsequent post describe the other parts of the new guide.

Posted on: 09 May 2014 at 15:19 hrs


09 May 2014

AppSensor Guide Part IV : Demonstration Implementations

This post describes what is in Part IV of the new OWASP AppSensor Guide v2.0, published on 2nd May.

Photograph of a wooden gate to a field, with a sign 'Beware of Bull' on it, with verdant green grass and coniferous trees behind

"Part IV : Demonstration Implementations" comprises seven chapters, each three pages long, describing model implementations:

  • Chapter 20 : Web Services (AppSensor WS)
  • Chapter 21 : Fully Integrated (AppSensor Core)
  • Chapter 22 : Light Touch Retrofit
  • Chapter 23 : Ensnare for Ruby
  • Chapter 24 : Invocation of AppSensor Code Using Jni4Net
  • Chapter 25 : Using an External Log Management System
  • Chapter 26 : Leveraging a Web Application Firewall.

Part IV provides practical examples of how the AppSensor concept can be deployed, including some standalone components that could be utilised within an organisation's own deployments, or act as inspiration. The OWASP code portion of the AppSensor Project, which aims to build a reference implementation of the concepts conveyed in the guide, is described in chapters 20 and 21.

Each chapter describes the source of the implementation, provides a schematic arrangement, defines which types of detection points and responses are possible, the location of source code, and details of any considerations and related implementations. There is no single implementation method or single best-suited out-the-box solution, since the approach is application-specific.
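As a purely illustrative sketch of the core mechanism the implementations share - detection points recording events, with a threshold triggering a response - consider something like the following (the names, policy and threshold are hypothetical; the real implementations in chapters 20 and 21 are far more complete):

```python
from collections import defaultdict

# Per-user, per-detection-point event counts (in memory for the sketch)
event_counts = defaultdict(int)

# Hypothetical policy: more than three events on one detection point
# from the same user is treated as an attack
THRESHOLD = 3

def respond(user, detection_point):
    # A real deployment might lock the account, alert staff, or slow
    # responses; here we just name the action taken
    return f"lockout:{user}"

def record_event(user, detection_point):
    """Record a security event; return the response action if the
    threshold for this user and detection point has been exceeded."""
    event_counts[(user, detection_point)] += 1
    if event_counts[(user, detection_point)] > THRESHOLD:
        return respond(user, detection_point)
    return "logged"

for _ in range(4):
    outcome = record_event("mallory", "ACE3")
print(outcome)  # lockout:mallory
```

The approach described in the guide separates exactly these concerns - event detection, attack determination against thresholds, and response selection - so that each can be tuned per application.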

Previous and subsequent posts describe the other parts of the new guide.

Posted on: 09 May 2014 at 11:57 hrs


08 May 2014

AppSensor Guide Part III : Making It Happen

This post describes what is in Part III of the new OWASP AppSensor Guide v2.0, published on 2nd May.

Photograph of a construction materials compound surrounded by a wire fence with a red and white reflective warning stripe along it

"Part III : Making It Happen" comprises seven chapters and is the largest part of the guide apart from the reference materials:

  • Chapter 13 : Introduction
  • Chapter 14 : Design and Implementation
  • Chapter 15 : Verification, Deployment and Operation
  • Chapter 16 : Advanced Detection Points
  • Chapter 17 : Advanced Thresholds and Responses
  • Chapter 18 : AppSensor and Application Event Logging
  • Chapter 19 : AppSensor and PCI DSS for Ecommerce Merchants.

Part III describes the process of planning, implementing and operating application-specific attack detection and response. The process described is technology agnostic and attempts to be descriptive rather than prescriptive, providing awareness, a description of the problem set, an outline of different approaches at a higher level and some generic approaches.

A description is provided of how to integrate AppSensor concepts into the software development lifecycle (SDLC), and includes mappings to the Open Software Assurance Maturity Model (Open SAMM), the Building Security In Maturity Model (BSIMM), the BITS Financial Services Roundtable Software Assurance Framework, and the Microsoft Security Development Lifecycle (MS SDL). Chapters 17 and 18 provide further information for those wishing to delve deeper into the selection and definition of detection points, attack determination thresholds and responses.

The guide shows how success using AppSensor concepts comes down to many details, and how the process suggested in Part III should be adapted to an organisation's own culture, its working practices and its risks.

Previous and subsequent posts describe the other parts of the new guide.

Posted on: 08 May 2014 at 13:12 hrs


08 May 2014

AppSensor Guide Part II : Illustrative Case Studies

This post describes what is in Part II of the new OWASP AppSensor Guide v2.0, published on 2nd May.

Photograph of a sign that reads 'Keep Out' in large white letters on a blue background

"Part II : Illustrative Case Studies" comprises eight chapters, each 1-2 pages long:

  • Chapter 5 : Case Study of a Rapidly Deployed Web Application
  • Chapter 6 : Case Study of a Magazine's Mobile App
  • Chapter 7 : Case Study of a Smart Grid Consumer Meter
  • Chapter 8 : Case Study of a Financial Market Trading System
  • Chapter 9 : Case Study of a B2C Ecommerce Website
  • Chapter 10 : Case Study of B2B Web Services
  • Chapter 11 : Case Study of a Document Management System
  • Chapter 12 : Case Study of a Credit Union's Online Banking.

Part II demonstrates how AppSensor can be used for a range of different software application architectures and business risks. The case studies span market sectors and application types, including web sites, web services, mobile apps, critical infrastructure, and client-server.

Each case study demonstrates how business objectives influence the selection of detection points and responses. Together they show that there is no single AppSensor solution applicable to all applications and organisations.

Previous and subsequent posts describe the other parts of the new guide.

Posted on: 08 May 2014 at 07:35 hrs


07 May 2014

AppSensor Guide Part I : AppSensor Overview

This post describes what is in Part I of the new OWASP AppSensor Guide v2.0, published on 2nd May.

Photograph of a sign on some security fencing that reads 'Warning - commit a crime here and you will be forensically tagged'

"Part I : AppSensor Overview" comprises four chapters spanning almost 30 pages of content:

  • Chapter 1 : Application-Specific Attack Detection & Response
  • Chapter 2 : Protection Measures
  • Chapter 3 : The AppSensor Approach
  • Chapter 4 : Conceptual Elements.

Part I gives a high-level overview of the concept and explains how it differs from traditional defensive techniques. This is followed by a description of the general approach to implementing AppSensor within application software projects.

It describes how the OWASP AppSensor Project defines a conceptual framework, methodology, guidance and example code to implement attack detection and automated responses. It is not a bolt-on tool or code library, but instead offers insight to an approach for organisations to specify or develop their own implementations - specific to their own business, applications, environments and risk profile - building upon existing standard security controls.

Subsequent posts describe the other parts of the new guide.

Posted on: 07 May 2014 at 17:21 hrs


06 May 2014

AppSensor Guide v2.0 Released

I have been working on writing a new full guide to OWASP AppSensor, the project that describes and explains creating attack-aware software applications with real-time defences. It is now complete.

Banner image for AppSensor Guide v2 using a photograph by Colin Watson of Light Installation by David Press taken at the Kinetica Art Fair 2012, Ambika P3 Gallery, London and overlaid with the words 'AppSensor Guide v2.0 - Application-Specific Real Time Attack Detection and Response'

The new book was announced to the project's mailing lists in a message sent a short time ago. The AppSensor Guide v2.0 is written in English, and is available in three formats:

OWASP AppSensor is free to use and it is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.

The guide was completed entirely on a voluntary basis with no funding. However, we were fortunate to receive some funding from the OWASP Project Reboot Initiative to help promote the new book.

I would like to thank co-authors Dennis Groves and John Melton, and the other contributors, editors and reviewers, Ryan Barnett, Michael Coates, Craig Munson and Jay Reynolds. The growth and increased maturity of the project would not have been possible without the other people who contributed feedback, suggestions and ideas to the project, primarily through the mailing list and at OWASP chapter meetings. They are also listed in the book's acknowledgements.

The project has also benefitted greatly from the generous contribution of time and effort by many other volunteers in the OWASP community including those in the OWASP ESAPI project, members of the former OWASP Global Projects Committee, and participants at the AppSensor Summit held during AppSec USA 2011 in Minneapolis.

The foreword, written by project founder and OWASP board chair Michael Coates, is followed by a motivating preamble, written by co-project leader and OWASP co-founder Dennis Groves. The core of the book is arranged into the following parts:

Over the next few days I will describe further what is in each of these parts.

Posted on: 06 May 2014 at 17:27 hrs


29 April 2014

Worry When Your SEO is Worse Than Your Attackers

Sometimes fake web sites can be "better" than their originals. As reported this week, the Victoria's Secret web site has been duplicated in a very convincing copy. So good, in fact, that Google ranks it higher when searching for "Victoria Secret UK".

Screen capture of the google.co.uk search results for 'victoria secret uk' with the first result being the fake website

The possibly fake website is quite convincing, allows payment in five currencies, and cheekily has the "Verisign Secured" and "McAfee Secure" logos on the product pages. If this is a fake site, the motive could be to gather personal data through the registration process, or to steal cardholder data via the payment form using "ZHBPay Payment Gateway", or to sell counterfeit goods. Of course, it might be a valid site of a local agent or reseller, but the product ranges seem different. The conditions of use page is quite poorly written. It's a bit odd.

Screen capture of a catalogue page on the fake website showing the Verisign Secured and McAfee Secure logos

The real primary Victoria's Secret website, aimed at North American customers, is:


But there is a real UK-orientated splash page at the .co.uk equivalent domain (whois lookup):


The possibly fake site is (whois lookup):


A quick check on common factors used to improve search engine rankings suggests that the primary .com website has some problems, and the possibly fake site has more. But what differentiates them here is that the fake site is better optimised for the term "uk" than the real splash page.

The victoriassecrettuk.co.uk domain name was registered by an individual:

Nominet whois tool data for victoriassecrettuk.co.uk

Another search, for "victorias secret uk", gives the site www.thegrapescafebar.co.uk as the first result, which redirects to the fake site above.

Screen capture of the google.co.uk search results for 'victoria secrets uk'

What's even more confusing is that the UK customer care email address uses yet another domain (tellvictoria@victoriassecret.uk.com) and www.victoriassecret.uk.com redirects to the UK splash page (on www.victoriassecret.co.uk).

Domain name fail, and search engine optimisation (SEO) fail. Attacker win. The suspicious site is still there, three days after that initial report. I have emailed the real company just in case they are still not aware.

Posted on: 29 April 2014 at 21:39 hrs


09 April 2014

Third-Party Tracking Cookie Revelations

A new draft paper describes how the capture of tracking cookies can be used for mass surveillance and, where other personal information is leaked by web sites, to build up a wider picture of a person's real-world identity.

Title page from 'Cookies that give you away: Evaluating the surveillance implications of web tracking'

Dillon Reisman, Steven Englehardt, Christian Eubank, Peter Zimmerman, and Arvind Narayanan at Princeton University's Department of Computer Science investigated how someone with passive access to a network could glean information from observing HTTP cookies in transit. The authors explain how pseudo-anonymous third-party cookies can be tied together without having to rely on IP addresses.

Then, given personal data leaking over non-SSL content, this can be combined into a larger picture of the person. The paper assessed what personal information is leaked from Alexa Top 50 sites with login support.
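The linking technique can be sketched roughly as follows (my own simplification of the paper's idea, with invented cookie names and values): requests that share any identifying cookie value are clustered together, so a chain of overlapping third-party cookies transitively ties a browser's traffic together without any reference to IP addresses.

```python
# Requests seen by a passive observer: (request id, tracking cookie
# values visible on the wire). Cookie names and values are invented.
observed = [
    ("r1", {"trackerA=uid-111", "siteX=s1"}),
    ("r2", {"trackerA=uid-111", "trackerB=uid-222"}),
    ("r3", {"trackerB=uid-222", "siteY=s9"}),
    ("r4", {"trackerC=uid-999"}),
]

# Cluster requests that share any cookie value: r1 and r2 share
# trackerA, r2 and r3 share trackerB, so r1-r3 are one browser.
clusters = []
for req, cookies in observed:
    matching = [c for c in clusters if c["cookies"] & cookies]
    merged = {"requests": {req}, "cookies": set(cookies)}
    for c in matching:
        clusters.remove(c)
        merged["requests"] |= c["requests"]
        merged["cookies"] |= c["cookies"]
    clusters.append(merged)

print([sorted(c["requests"]) for c in clusters])
# [['r1', 'r2', 'r3'], ['r4']]
```

The paper's contribution is showing how effective this is in practice against real tracking ecosystems; the sketch only conveys the clustering principle.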

This work is likely to attract the attention of privacy advocates and regulators, leading to increased interest in cookies and other tracking mechanisms.

The research work was motivated by two leaked NSA documents.

Posted on: 09 April 2014 at 10:02 hrs


04 April 2014

Regulation of Software with a Medical Purpose

I seem to have a series of regulation-related posts at the moment. Perhaps the time of year. An article on OutLaw.com discusses how mobile apps and other software with a medical purpose may be subject to regulation.

Photograph of shelves in a shop displaying rows of medications

The UK's Medicines and Healthcare Products Regulations Agency (MHRA) is responsible for regulating all medicines and medical devices in the UK by ensuring they work and are acceptably safe. It has issued new guidance on "medical device stand-alone software (including apps)" which is defined as "software which has a medical purpose which at the time of it being placed onto the market is not incorporated into a medical device". Thus "software... intended by the manufacturer to be used for human beings for the purpose of:

  • diagnosis, prevention, monitoring, treatment or alleviation of disease,
  • diagnosis, monitoring, treatment, alleviation of or compensation for an injury or handicap,
  • investigation, replacement or modification of the anatomy or of a physiological process,
  • control of conception..."

Guidance on Medical Device Stand-alone Software (Including Apps) describes the scope, requirements and software-specific considerations. Product liability and safety considerations are also mentioned.

This introduces the potential need for registration, documentation, self-assessment, validation, monitoring and incident reporting, especially if the software performs any form of diagnosis or assessment. The OutLaw.com article provides a good analysis and views from experts.

Posted on: 04 April 2014 at 10:11 hrs


07 March 2014

PCIDSS SAQ A-EP and SAQ A: Comparison of Questions

PCIDSS SAQ A-EP and SAQ A are very different in PCIDSS version 3.0, and there are some minor changes between SAQ A versions 2.0 and 3.0.

SAQ A-EP has been developed to address requirements applicable to e-commerce merchants with a website(s) that does not itself receive cardholder data but which does affect the security of the payment transaction and/or the integrity of the page that accepts the consumer's cardholder data.

In the table below, "Y" indicates the question is included in the SAQ. The question text is taken from PCIDSS v3.0, and there are some numbering differences with version 2.0 under requirement 9. There are an order of magnitude more questions on the Self-Assessment Questionnaire (SAQ) for "Partially Outsourced E-commerce Merchants Using a Third-Party Website for Payment Processing" (SAQ A-EP).

See my previous post for information about the SAQ A-EP eligibility criteria for e-commerce merchants and another post providing an introduction to the change.

Do all these questions apply to your own web site/e-commerce environment? The only answer to this is what your acquirer or payment brand requires of you, in your region (e.g. Europe). It is possibly the case that ecommerce-only merchants with fewer transactions (such as levels 3 and 4) might be asked to use an acquirer's risk-based approach or only certain milestones in the PCIDSS prioritised approach.

And of course some questions may relate to PCIDSS requirements that are deemed not applicable to your environment, when the "N/A" option is then selected and the "Explanation of Non-Applicability" worksheet in Appendix C of SAQ A-EP is completed for each "N/A" entry.

And to limit the PCIDSS scope, segmentation will be required to isolate the relevant e-commerce systems from other system components (see eligibility criteria), preferably also isolating as much of the non e-commerce aspects of the website as possible. However, most of the designated PCIDSS requirements ought to be in place for security reasons anyway? Hopefully.

PCIDSS Self-Assessment Questionnaire (SAQ) Question v2.0 v3.0
1.1.4 (a) Is a firewall required and implemented at each Internet connection and between any demilitarized zone (DMZ) and the internal network zone? Y
(b) Is the current network diagram consistent with the firewall configuration standards? Y
1.1.6 (a) Do firewall and router configuration standards include a documented list of services, protocols, and ports, including business justification (for example, hypertext transfer protocol (HTTP), Secure Sockets Layer (SSL), Secure Shell (SSH), and Virtual Private Network (VPN) protocols)? Y
(b) Are all insecure services, protocols, and ports identified, and are security features documented and implemented for each identified service? Y
1.2 Do firewall and router configurations restrict connections between untrusted networks and any system in the cardholder data environment as follows:
Note: An "untrusted network" is any network that is external to the networks belonging to the entity under review, and/or which is out of the entity's ability to control or manage.
1.2.1 (a) Is inbound and outbound traffic restricted to that which is necessary for the cardholder data environment? Y
(b) Is all other inbound and outbound traffic specifically denied (for example by using an explicit "deny all" or an implicit deny after allow statement)? Y
1.3.4 Are anti-spoofing measures implemented to detect and block forged sourced IP addresses from entering the network? (For example, block traffic originating from the internet with an internal address) Y
1.3.5 Is outbound traffic from the cardholder data environment to the Internet explicitly authorized? Y
1.3.6 Is stateful inspection, also known as dynamic packet filtering, implemented--that is, only established connections are allowed into the network? Y
1.3.8 (a) Are methods in place to prevent the disclosure of private IP addresses and routing information to the Internet?
Note: Methods to obscure IP addressing may include, but are not limited to:
* Network Address Translation (NAT)
* Placing servers containing cardholder data behind proxy servers/firewalls
* Removal or filtering of route advertisements for private networks that employ registered addressing
* Internal use of RFC1918 address space instead of registered addresses.
(b) Is any disclosure of private IP addresses and routing information to external entities authorized? Y
2.1 (a) Are vendor-supplied defaults always changed before installing a system on the network?
This applies to ALL default passwords, including but not limited to those used by operating systems, software that provides security services, application and system accounts, point-of-sale (POS) terminals, Simple Network Management Protocol (SNMP) community strings, etc.).
(b) Are unnecessary default accounts removed or disabled before installing a system on the network? Y
2.2 (a) Are configuration standards developed for all system components and are they consistent with industry-accepted system hardening standards?
Sources of industry-accepted system hardening standards may include, but are not limited to, SysAdmin Audit Network Security (SANS) Institute, National Institute of Standards Technology (NIST), International Organization for Standardization (ISO), and Center for Internet Security (CIS).
(b) Are system configuration standards updated as new vulnerability issues are identified, as defined in Requirement 6.1? Y
(c) Are system configuration standards applied when new systems are configured? Y
(d) Do system configuration standards include all of the following:
* Changing of all vendor-supplied defaults and elimination of unnecessary default accounts?
* Implementing only one primary function per server to prevent functions that require different security levels from co-existing on the same server?
* Enabling only necessary services, protocols, daemons, etc., as required for the function of the system?
* Implementing additional security features for any required services, protocols or daemons that are considered to be insecure?
* Configuring system security parameters to prevent misuse?
* Removing all unnecessary functionality, such as scripts, drivers, features, subsystems, file systems, and unnecessary web servers?
2.2.1 (a) Is only one primary function implemented per server, to prevent functions that require different security levels from co-existing on the same server?
For example, web servers, database servers, and DNS should be implemented on separate servers.
(b) If virtualization technologies are used, is only one primary function implemented per virtual system component or device? Y
2.2.2 (a) Are only necessary services, protocols, daemons, etc. enabled as required for the function of the system (services and protocols not directly needed to perform the device's specified function are disabled)? Y
(b) Are all enabled insecure services, daemons, or protocols justified per documented configuration standards? Y
2.2.3 Are additional security features documented and implemented for any required services, protocols or daemons that are considered to be insecure?
For example, use secured technologies such as SSH, S-FTP, SSL or IPSec VPN to protect insecure services such as NetBIOS, file-sharing, Telnet, FTP, etc.
2.2.4 (a) Are system administrators and/or personnel that configure system components knowledgeable about common security parameter settings for those system components? Y
(b) Are common system security parameters settings included in the system configuration standards? Y
(c) Are security parameter settings set appropriately on system components? Y
2.2.5 (a) Has all unnecessary functionality--such as scripts, drivers, features, subsystems, file systems, and unnecessary web servers--been removed? Y
(b) Are enabled functions documented and do they support secure configuration? Y
(c) Is only documented functionality present on system components? Y
2.3 Is non-console administrative access encrypted as follows: Use technologies such as SSH, VPN, or SSL/TLS for web-based management and other non-console administrative access.
(a) Is all non-console administrative access encrypted with strong cryptography, and is a strong encryption method invoked before the administrator's password is requested? Y
(b) Are system services and parameter files configured to prevent the use of Telnet and other insecure remote login commands? Y
(c) Is administrator access to web-based management interfaces encrypted with strong cryptography? Y
(d) For the technology in use, is strong cryptography implemented according to industry best practice and/or vendor recommendations? Y
3.2 (c) Is sensitive authentication data deleted or rendered unrecoverable upon completion of the authorization process? Y
(d) Do all systems adhere to the following requirements regarding non-storage of sensitive authentication data after authorization (even if encrypted): Y
3.2.2 The card verification code or value (three-digit or four-digit number printed on the front or back of a payment card) is not stored after authorisation Y
3.2.3 The personal identification number (PIN) or the encrypted PIN block is not stored after authorization? Y
4.1 (a) Are strong cryptography and security protocols, such as SSL/TLS, SSH or IPSEC, used to safeguard sensitive cardholder data during transmission over open, public networks?
Examples of open, public networks include but are not limited to the Internet; wireless technologies, including 802.11 and Bluetooth; cellular technologies, for example, Global System for Mobile communications (GSM), Code division multiple access (CDMA); and General Packet Radio Service (GPRS).
(b) Are only trusted keys and/or certificates accepted? Y
(c) Are security protocols implemented to use only secure configurations, and to not support insecure versions or configurations? Y
(d) Is the proper encryption strength implemented for the encryption methodology in use (check vendor recommendations/best practices)? Y
(e) For SSL/TLS implementations, is SSL/TLS enabled whenever cardholder data is transmitted or received?
For example, for browser-based implementations: * "HTTPS" appears as the browser Universal Record Locator (URL) protocol, and
* Cardholder data is only requested if "HTTPS" appears as part of the URL.
4.2 (b) Are policies in place that state that unprotected PANs are not to be sent via end-user messaging technologies? Y
5.1 Is anti-virus software deployed on all systems commonly affected by malicious software? Y
5.1.1 Are anti-virus programs capable of detecting, removing, and protecting against all known types of malicious software (for example, viruses, Trojans, worms, spyware, adware, and rootkits)? Y
5.1.2 Are periodic evaluations performed to identify and evaluate evolving malware threats in order to confirm whether those systems considered to not be commonly affected by malicious software continue as such? Y
5.2 Are all anti-virus mechanisms maintained as follows:
(a) Are all anti-virus software and definitions kept current? Y
(b) Are automatic updates and periodic scans enabled and being performed? Y
(c) Are all anti-virus mechanisms generating audit logs, and are logs retained in accordance with PCI DSS Requirement 10.7? Y
5.3 Are all anti-virus mechanisms:
* Actively running?
* Unable to be disabled or altered by users?
Note: Anti-virus solutions may be temporarily disabled only if there is legitimate technical need, as authorized by management on a case-by-case basis. If anti-virus protection needs to be disabled for a specific purpose, it must be formally authorized. Additional security measures may also need to be implemented for the period of time during which anti-virus protection is not active.
6.1 Is there a process to identify security vulnerabilities, including the following:
* Using reputable outside sources for vulnerability information?
* Assigning a risk ranking to vulnerabilities that includes identification of all "high" risk and "critical" vulnerabilities?
Note: Risk rankings should be based on industry best practices as well as consideration of potential impact. For example, criteria for ranking vulnerabilities may include consideration of the CVSS base score and/or the classification by the vendor, and/or type of systems affected.
Methods for evaluating vulnerabilities and assigning risk ratings will vary based on an organization's environment and risk assessment strategy. Risk rankings should, at a minimum, identify all vulnerabilities considered to be a "high risk" to the environment. In addition to the risk ranking, vulnerabilities may be considered "critical" if they pose an imminent threat to the environment, impact critical systems, and/or would result in a potential compromise if not addressed. Examples of critical systems may include security systems, public-facing devices and systems, databases, and other systems
6.2 (a) Are all system components and software protected from known vulnerabilities by installing applicable vendor-supplied security patches? Y
(b) Are critical security patches installed within one month of release?
Note: Critical security patches should be identified according to the risk ranking process defined in Requirement 6.1.
6.4.5 (a) Are change-control procedures for implementing security patches and software modifications documented and require the following?
* Documentation of impact
* Documented change control approval by authorized parties
* Functionality testing to verify that the change does not adversely impact the security of the system
* Back-out procedures
(b) Are the following performed and documented for all changes:
* Documentation of impact? Y
* Documented approval by authorized parties? Y
* Functionality testing to verify that the change does not adversely impact the security of the system? Y
* For custom code changes, testing of updates for compliance with PCI DSS Requirement 6.5 before being deployed into production? Y
* Back-out procedures? Y
6.5 (c) Are applications developed based on secure coding guidelines to protect applications from, at a minimum, the following vulnerabilities:
6.5.1 Do coding techniques address injection flaws, particularly SQL injection?
Note: Also consider OS Command Injection, LDAP and XPath injection flaws as well as other injection flaws.
6.5.2 Do coding techniques address buffer overflow vulnerabilities? Y
For web applications and application interfaces (internal or external), are applications developed based on secure coding guidelines to protect applications from the following additional vulnerabilities:
6.5.7 Do coding techniques address cross-site scripting (XSS) vulnerabilities? Y
6.5.8 Do coding techniques address improper access control such as insecure direct object references, failure to restrict URL access, directory traversal, and failure to restrict user access to functions? Y
6.5.9 Do coding techniques address cross-site request forgery (CSRF)? Y
6.5.10 Do coding techniques address broken authentication and session management?
Note: Requirement 6.5.10 is a best practice until June 30, 2015, after which it becomes a requirement.
6.6 For public-facing web applications, are new threats and vulnerabilities addressed on an ongoing basis, and are these applications protected against known attacks by applying either of the following methods?
* Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods, as follows:
- At least annually
- After any changes
- By an organization that specializes in application security
- That all vulnerabilities are corrected
- That the application is re-evaluated after the corrections
Note: This assessment is not the same as the vulnerability scans performed for Requirement 11.2.
- OR -
* Installing an automated technical solution that detects and prevents web-based attacks (for example, a web-application firewall) in front of public-facing web applications to continually check all traffic.
7.1 Is access to system components and cardholder data limited to only those individuals whose jobs require such access, as follows:
7.1.2 Is access to privileged user IDs restricted as follows:
* To least privileges necessary to perform job
* Assigned only to roles that specifically require that privileged access?
7.1.3 Is access assigned based on individual personnel's job classification and function? Y
8.1.1 Are all users assigned a unique ID before allowing them to access system components or cardholder data? Y
8.1.3 Is access for any terminated users immediately deactivated or removed? Y
8.1.5 (a) Are accounts used by vendors to access, support, or maintain system components via remote access enabled only during the time period needed and disabled when not in use? Y
(b) Are vendor remote access accounts monitored when in use? Y
8.1.6 (a) Are repeated access attempts limited by locking out the user ID after no more than six attempts? Y
8.1.7 Once a user account is locked out, is the lockout duration set to a minimum of 30 minutes or until an administrator enables the user ID? Y
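The lockout pair 8.1.6/8.1.7 can be sketched as a small state machine: count consecutive failures per user ID, lock at six, and release after thirty minutes (or on administrator unlock). This is an illustrative sketch, not production authentication code; the class and method names are my own:

```python
import time

MAX_ATTEMPTS = 6            # 8.1.6: lock after no more than six attempts
LOCKOUT_SECONDS = 30 * 60   # 8.1.7: lockout lasts a minimum of 30 minutes

class AccountLockout:
    """Track consecutive failed logins per user ID and enforce a lockout."""

    def __init__(self):
        self._failures = {}    # user_id -> consecutive failure count
        self._locked_at = {}   # user_id -> time the lockout began

    def is_locked(self, user_id, now=None):
        now = time.time() if now is None else now
        locked_at = self._locked_at.get(user_id)
        if locked_at is None:
            return False
        if now - locked_at >= LOCKOUT_SECONDS:
            # Lockout expired; an administrator unlock would clear the same state.
            self._locked_at.pop(user_id, None)
            self._failures.pop(user_id, None)
            return False
        return True

    def record_failure(self, user_id, now=None):
        now = time.time() if now is None else now
        self._failures[user_id] = self._failures.get(user_id, 0) + 1
        if self._failures[user_id] >= MAX_ATTEMPTS:
            self._locked_at[user_id] = now

    def record_success(self, user_id):
        self._failures.pop(user_id, None)
```

Note the counter resets on success, so the limit applies to *consecutive* failures.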
8.2 In addition to assigning a unique ID, is one or more of the following methods employed to authenticate all users?
* Something you know, such as a password or passphrase
* Something you have, such as a token device or smart card
* Something you are, such as a biometric
8.2.1 (a) Is strong cryptography used to render all authentication credentials (such as passwords/phrases) unreadable during transmission and storage on all system components? Y
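For the storage half of 8.2.1, "unreadable" in practice means a salted, iterated one-way derivation rather than reversible encryption. A minimal sketch using the standard library's PBKDF2 (parameter choices here, such as the iteration count, are illustrative assumptions):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    """Derive a salted hash so the stored credential is unreadable."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    """Re-derive from the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)
```

The transmission half of 8.2.1 is handled separately, by sending credentials only over TLS.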
8.2.3 (a) Are user password parameters configured to require passwords/passphrases meet the following?
* A minimum password length of at least seven characters
* Contain both numeric and alphabetic characters
Alternatively, the passwords/phrases must have complexity and strength at least equivalent to the parameters specified above.
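The 8.2.3 minimums reduce to two mechanical checks. A sketch of a validator enforcing exactly the parameters listed above (the function name is my own):

```python
def meets_policy(password: str) -> bool:
    """8.2.3 minimums: at least seven characters, containing
    both numeric and alphabetic characters."""
    return (len(password) >= 7
            and any(c.isdigit() for c in password)
            and any(c.isalpha() for c in password))
```

A scheme of "equivalent complexity and strength", as the alternative allows, would replace this with its own ruleset.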
8.2.4 (a) Are user passwords/passphrases changed at least every 90 days? Y
8.2.5 (a) Must an individual submit a new password/phrase that is different from any of the last four passwords/phrases he or she has used? Y
8.2.6 Are passwords/phrases set to a unique value for each user for first-time use and upon reset, and must each user change their password immediately after the first use? Y
8.3 Is two-factor authentication incorporated for remote network access originating from outside the network by personnel (including users and administrators) and all third parties (including vendor access for support or maintenance)?
Note: Two-factor authentication requires that two of the three authentication methods (see PCI DSS Requirement 8.2 for descriptions of authentication methods) be used for authentication. Using one factor twice (for example, using two separate passwords) is not considered two-factor authentication.
Examples of two-factor technologies include remote authentication and dial-in service (RADIUS) with tokens; terminal access controller access control system (TACACS) with tokens; and other technologies that facilitate two-factor authentication.
8.5 Are group, shared, or generic accounts, passwords, or other authentication methods prohibited as follows:
* Generic user IDs and accounts are disabled or removed;
* Shared user IDs for system administration activities and other critical functions do not exist; and
* Shared and generic user IDs are not used to administer any system components?
8.6 Where other authentication mechanisms are used (for example, physical or logical security tokens, smart cards, and certificates, etc.), is the use of these mechanisms assigned as follows?
* Authentication mechanisms must be assigned to an individual account and not shared among multiple accounts
* Physical and/or logical controls must be in place to ensure only the intended account can use that mechanism to gain access
9.1 Are appropriate facility entry controls in place to limit and monitor physical access to systems in the cardholder data environment? Y
9.5 Are all media physically secured (including but not limited to computers, removable electronic media, paper receipts, paper reports, and faxes)? Y (9.6) Y Y
Note: For purposes of Requirement 9, "media" refers to all paper and electronic media containing cardholder data.
9.6 (a) Is strict control maintained over the internal or external distribution of any kind of media? Y (9.7) Y Y
(b) Do controls include the following:
9.6.1 Is media classified so the sensitivity of the data can be determined? Y (9.7.1) Y Y
9.6.2 Is media sent by secured courier or other delivery method that can be accurately tracked? Y (9.7.2) Y Y
9.6.3 Is management approval obtained prior to moving the media (especially when media is distributed to individuals)? Y (9.8) Y Y
9.7 Is strict control maintained over the storage and accessibility of media? Y (9.9) Y Y
9.8 (a) Is all media destroyed when it is no longer needed for business or legal reasons? Y (9.10) Y Y
(c) Is media destruction performed as follows:
9.8.1 (a) Are hardcopy materials cross-cut shredded, incinerated, or pulped so that cardholder data cannot be reconstructed? Y (9.10.1a) Y Y
(b) Are storage containers used for materials that contain information to be destroyed secured to prevent access to the contents? Y (9.10.1b) Y Y
10.2 Are automated audit trails implemented for all system components to reconstruct the following events:
10.2.2 All actions taken by any individual with root or administrative privileges? Y
10.2.4 Invalid logical access attempts? Y
10.2.5 Use of and changes to identification and authentication mechanisms, including but not limited to creation of new accounts and elevation of privileges, and all changes, additions, or deletions to accounts with root or administrative privileges? Y
10.3 Are the following audit trail entries recorded for all system components for each event:
10.3.1 User identification? Y
10.3.2 Type of event? Y
10.3.3 Date and time? Y
10.3.4 Success or failure indication? Y
10.3.5 Origination of event? Y
10.3.6 Identity or name of affected data, system component, or resource? Y
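The six 10.3 entries map naturally onto a structured log record. A sketch emitting one JSON line per event, with each field annotated against the requirement it satisfies (the field names and function are illustrative):

```python
import datetime
import json

def audit_event(user_id, event_type, success, origin, resource):
    """Build one audit-trail record carrying each field Requirement 10.3 asks for."""
    record = {
        'user': user_id,        # 10.3.1 user identification
        'event': event_type,    # 10.3.2 type of event
        'timestamp': datetime.datetime.now(
            datetime.timezone.utc).isoformat(),  # 10.3.3 date and time
        'success': success,     # 10.3.4 success or failure indication
        'origin': origin,       # 10.3.5 origination of event
        'resource': resource,   # 10.3.6 affected data, component, or resource
    }
    return json.dumps(record)
```

One record per line keeps the trail easy to ship to the centralised log server that 10.5.4 requires.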
10.5.4 Are logs for external-facing technologies (for example, wireless, firewalls, DNS, mail) written onto a secure, centralized, internal log server or media? Y
10.6 Are logs and security events for all system components reviewed to identify anomalies or suspicious activity as follows?
Note: Log harvesting, parsing, and alerting tools may be used to achieve compliance with Requirement 10.6.
10.6.1 (b) Are the following logs and security events reviewed at least daily, either manually or via log tools?
* All security events
* Logs of all system components that store, process, or transmit CHD and/or SAD, or that could impact the security of CHD and/or SAD
* Logs of all critical system components
* Logs of all servers and system components that perform security functions (for example, firewalls, intrusion-detection systems/intrusion-prevention systems (IDS/IPS), authentication servers, e-commerce redirection servers, etc.)?
10.6.2 (b) Are logs of all other system components periodically reviewed, either manually or via log tools, based on the organization's policies and risk management strategy? Y
10.6.3 (b) Is follow-up to exceptions and anomalies identified during the review process performed? Y
10.7 (b) Are audit logs retained for at least one year? Y
(c) Are at least the last three months' logs immediately available for analysis? Y
11.2.2 (a) Are quarterly external vulnerability scans performed?
Note: Quarterly external vulnerability scans must be performed by an Approved Scanning Vendor (ASV), approved by the Payment Card Industry Security Standards Council (PCI SSC).
Refer to the ASV Program Guide published on the PCI SSC website for scan customer responsibilities, scan preparation, etc.
(b) Do external quarterly scan and rescan results satisfy the ASV Program Guide requirements for a passing scan (for example, no vulnerabilities rated 4.0 or higher by the CVSS, and no automatic failures)? Y
(c) Are quarterly external vulnerability scans performed by a PCI SSC Approved Scanning Vendor (ASV)? Y
11.2.3 (a) Are internal and external scans, and rescans as needed, performed after any significant change?
Note: Scans must be performed by qualified personnel.
(b) Does the scan process include rescans until:
* For external scans, no vulnerabilities exist that are scored 4.0 or higher by the CVSS;
* For internal scans, a passing result is obtained or all "high-risk" vulnerabilities as defined in PCI DSS Requirement 6.1 are resolved?
(c) Are scans performed by a qualified internal resource(s) or qualified external third party, and if applicable, does organizational independence of the tester exist (not required to be a QSA or ASV)? Y
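The external-scan rescan condition is a simple threshold over CVSS scores: rescan until nothing scores 4.0 or higher. As a sketch over hypothetical scan findings (the dictionary shape is an assumption, not an ASV report format):

```python
def failing_findings(findings):
    """Return the findings that block a passing external scan:
    anything scored 4.0 or higher by the CVSS must be corrected and rescanned."""
    return [f for f in findings if f['cvss'] >= 4.0]

def external_scan_passes(findings):
    """A passing external scan has no finding at or above the 4.0 threshold."""
    return not failing_findings(findings)
```

Internal scans use the different condition stated above: a passing result, or all "high-risk" vulnerabilities per Requirement 6.1 resolved.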
11.3 Does the penetration-testing methodology include the following?
* Is based on industry-accepted penetration testing approaches (for example, NIST SP800-115)
* Includes coverage for the entire CDE perimeter and critical systems
* Includes testing from both inside and outside the network
* Includes testing to validate any segmentation and scope-reduction controls
* Defines application-layer penetration tests to include, at a minimum, the vulnerabilities listed in Requirement 6.5
* Defines network-layer penetration tests to include components that support network functions as well as operating systems
* Includes review and consideration of threats and vulnerabilities experienced in the last 12 months
* Specifies retention of penetration testing results and remediation activities results
11.3.1 (a) Is external penetration testing performed per the defined methodology, at least annually, and after any significant infrastructure or application changes to the environment (such as an operating system upgrade, a sub-network added to the environment, or an added web server)? Y
(b) Are tests performed by a qualified internal resource or qualified external third party, and if applicable, does organizational independence of the tester exist (not required to be a QSA or ASV)? Y
11.3.3 Are exploitable vulnerabilities found during penetration testing corrected, followed by repeated testing to verify the corrections? Y
11.3.4 (a) [If segmentation is used to isolate the CDE from other networks:] Are penetration-testing procedures defined to test all segmentation methods, to confirm they are operational and effective, and isolate all out-of-scope systems from in-scope systems? Y
(b) Does penetration testing to verify segmentation controls meet the following?
* Performed at least annually and after any changes to segmentation controls/methods
* Covers all segmentation controls/methods in use
* Verifies that segmentation methods are operational and effective, and isolate all out-of-scope systems from in-scope systems.
11.5 (a) Is a change-detection mechanism (for example, file integrity monitoring tools) deployed within the cardholder data environment to detect unauthorized modification of critical system files, configuration files, or content files?
Examples of files that should be monitored include:
* System executables
* Application executables
* Configuration and parameter files
* Centrally stored, historical or archived, log, and audit files
* Additional critical files determined by entity (for example, through risk assessment or other means)
(b) Is the change-detection mechanism configured to alert personnel to unauthorized modification of critical system files, configuration files or content files, and do the tools perform critical file comparisons at least weekly?
Note: For change detection purposes, critical files are usually those that do not regularly change, but the modification of which could indicate a system compromise or risk of compromise. Change-detection mechanisms such as file-integrity monitoring products usually come pre-configured with critical files for the related operating system. Other critical files, such as those for custom applications, must be evaluated and defined by the entity (that is, the merchant or service provider).
11.5.1 Is a process in place to respond to any alerts generated by the change-detection solution? Y
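The core of a change-detection mechanism like 11.5 describes is a baseline of file hashes compared on a schedule. A minimal sketch (real file-integrity monitoring products add signed baselines, scheduling, and alerting; the function names are my own):

```python
import hashlib
import os

def hash_file(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    """Record known-good hashes for the monitored critical files."""
    return {p: hash_file(p) for p in paths}

def detect_changes(known):
    """Compare current hashes against the baseline; report modified or missing files."""
    alerts = []
    for path, expected in known.items():
        if not os.path.exists(path):
            alerts.append((path, 'missing'))
        elif hash_file(path) != expected:
            alerts.append((path, 'modified'))
    return alerts
```

Running the comparison at least weekly, and routing any alerts into the 11.5.1 response process, completes the control.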
12.1 Is a security policy established, published, maintained, and disseminated to all relevant personnel? Y
12.1.1 Is the security policy reviewed at least annually and updated when the environment changes? Y
12.4 Do security policy and procedures clearly define information security responsibilities for all personnel? Y
12.5 (b) Are the following information security management responsibilities formally assigned to an individual or team:
12.5.3 Establishing, documenting, and distributing security incident response and escalation procedures to ensure timely and effective handling of all situations? Y
12.6 (a) Is a formal security awareness program in place to make all personnel aware of the importance of cardholder data security? Y
12.8 Are policies and procedures maintained and implemented to manage service providers with whom cardholder data is shared, or that could affect the security of cardholder data, as follows:
12.8.1 Is a list of service providers maintained? Y Y Y
12.8.2 Is a written agreement maintained that includes an acknowledgement that the service providers are responsible for the security of cardholder data the service providers possess or otherwise store, process, or transmit on behalf of the customer, or to the extent that they could impact the security of the customer's cardholder data environment?
Note: The exact wording of an acknowledgement will depend on the agreement between the two parties, the details of the service being provided, and the responsibilities assigned to each party. The acknowledgement does not have to include the exact wording provided in this requirement.
12.8.3 Is there an established process for engaging service providers, including proper due diligence prior to engagement? Y Y Y
12.8.4 Is a program maintained to monitor service providers' PCI DSS compliance status at least annually? Y Y Y
12.8.5 Is information maintained about which PCI DSS requirements are managed by each service provider, and which are managed by the entity? Y Y
12.10.1 (a) Has an incident response plan been created to be implemented in the event of system breach? Y
(b) Does the plan address the following, at a minimum:
* Roles, responsibilities, and communication and contact strategies in the event of a compromise including notification of the payment brands, at a minimum? Y
* Specific incident response procedures? Y
* Business recovery and continuity procedures? Y
* Data backup processes? Y
* Analysis of legal requirements for reporting compromises? Y
* Coverage and responses of all critical system components? Y
* Reference or inclusion of incident response procedures from the payment brands? Y
Total Number of Questions 13 14 139

Well, sorry this page is so long. When I began, I thought it was a useful idea, and once started I wanted to complete the list. It is useful to me, if to no-one else.

Posted on: 07 March 2014 at 15:24 hrs

