Designers of web sites and web application systems have many places where potential attacks could be detected. But where should attacks be stopped?
There is of course the problem of identifying what exactly constitutes an attack: is it a malicious payload, a drive-by attack, reconnaissance for something worse to come, a search engine crawler, or a valid user submitting a typographical error? Sometimes you may not know without correlating data from different sources over a period of time, so some sort of centralised logging, correlation and alerting system is required.
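For example, here is a minimal sketch of the kind of time-windowed correlation that might separate a one-off typing mistake from reconnaissance. The window, threshold and event structure are illustrative assumptions, not a prescription:

```python
from collections import defaultdict, deque
from time import time

WINDOW_SECONDS = 300   # hypothetical correlation window
ALERT_THRESHOLD = 5    # hypothetical: anomalies from one source before alerting

# Per-source history of recent anomalous events (timestamp, event type).
events = defaultdict(deque)

def record_anomaly(source_ip: str, event_type: str) -> bool:
    """Record an anomalous request and return True if the source now
    warrants an alert: several anomalies from one source in a short
    window look like reconnaissance; a single one may be a typo."""
    now = time()
    history = events[source_ip]
    history.append((now, event_type))
    # Discard events that have fallen outside the correlation window.
    while history and history[0][0] < now - WINDOW_SECONDS:
        history.popleft()
    return len(history) >= ALERT_THRESHOLD
```

In practice this correlation would live in the centralised system rather than in each application, but the principle is the same: one event is noise, a pattern is a signal.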
Take for example a page address (URL) that is only meant to be requested by a particular authorised source (user or system). This might be something like a simple form for suppliers to collect orders, an uptime monitoring target, a data import routine or a data exchange facility. Let's assume the application script processing this URL validates the request format, the argument names and values, and the source (perhaps by some combination of username, password, token, IP address and certificate). But what should be done if the request comes from an unauthorised source or fails one of the other validation checks?
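Before answering that question, here is a minimal sketch of the kinds of checks such a script might make. Everything here (the allowed network, the expected arguments, the shared token) is a hypothetical stand-in for your own configuration:

```python
from ipaddress import ip_address, ip_network
import hmac

# Hypothetical configuration for a supplier order-collection URL.
ALLOWED_NETWORKS = [ip_network("203.0.113.0/24")]   # documentation range (RFC 5737)
EXPECTED_ARGS = {"supplier_id", "order_ref"}
SHARED_TOKEN = "replace-with-a-real-secret"

def validate_request(source_ip: str, args: dict, token: str) -> list:
    """Return a list of validation failures; an empty list means the request passes."""
    failures = []
    # Source check: is the caller inside an authorised network?
    if not any(ip_address(source_ip) in net for net in ALLOWED_NETWORKS):
        failures.append("unauthorised source address")
    # Argument-name check: exactly the expected parameters, no extras.
    if set(args) != EXPECTED_ARGS:
        failures.append("unexpected or missing arguments")
    # Token check, compared in constant time to avoid timing leaks.
    if not hmac.compare_digest(token.encode(), SHARED_TOKEN.encode()):
        failures.append("invalid token")
    return failures
```

A real application would load these values from configuration and check credentials against a proper store; the point is that the application can inspect things a packet filter cannot see.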
Such requests could be blocked by a router, network firewall, web application firewall, or even by restrictions configured on the web server itself. It may seem most appropriate to add rules to the network firewall. In a simple web hosting architecture, this might look like the following:
where the incoming invalid request (potential attack) is shown as a dark orange arrow. Of course, your architecture may include other types of application firewalls, load balancing and clustered servers in a 3-tier arrangement:
Even this more complex architecture doesn't show other servers and network devices that may exist, such as network storage, authentication servers, intrusion detection systems (IDS), backup devices, the internal network, failover/standby systems, and so on. But unless these systems log appropriately, and those logs are incorporated into real-time alerting and monitoring systems, we might be none the wiser that someone (an "attacker") is having a go.
In many cases it may be better to let the request pass through the intermediate control points so that the application itself can assess the request and, most probably, reject it. This means the application can record and report on it, and possibly combine this information with other anomalous requests.
Requests that could harm the web or application servers should of course be stopped earlier than this. But in the example of source constraints, it is better to capture as much information as possible about the source and content of the request. A network firewall is not the best device for this.
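To make that concrete, here is a minimal sketch of the sort of structured record an application can produce that a network firewall cannot. The field names are illustrative:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("app.security")

def log_rejected_request(source_ip, method, path, args, headers, failures):
    """Record everything the application knows about a rejected request,
    in a structured form a correlation system can consume later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "request_rejected",
        "source_ip": source_ip,
        "method": method,
        "path": path,
        "args": args,                            # full argument names and values
        "user_agent": headers.get("User-Agent", ""),
        "failures": failures,                    # which validation checks failed
    }
    logger.warning(json.dumps(event))
```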
The method implemented will depend upon what logging and correlation facilities are in place. If you have a central analysis server, such as a Security Information and Event Management (SIEM) system, it may also be able to control the blocking action, with a number of possible prevention points to choose from.
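For instance, the structured events from the previous sketch could be shipped to a central collector over syslog, a common SIEM ingestion path. The host and port below are assumptions:

```python
import logging
import logging.handlers

# Hypothetical central collector; many SIEMs accept events via syslog.
SIEM_HOST, SIEM_PORT = "siem.example.internal", 514

# SysLogHandler sends over UDP by default when given a (host, port) tuple.
handler = logging.handlers.SysLogHandler(address=(SIEM_HOST, SIEM_PORT))
handler.setFormatter(logging.Formatter("webapp: %(message)s"))

logger = logging.getLogger("app.security")
logger.addHandler(handler)
logger.setLevel(logging.WARNING)
```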
The other problem with moving security controls away from the application is that there are potentially more configuration settings to implement, and the application itself may not be able to detect whether these are in place. If the application has knowledge of attacks (from its own monitoring or via a SIEM), it can be stricter with subsequent requests from the same source, or adjust its general security posture to a more defensive condition.
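Finally, a sketch of how an application might escalate its posture as evidence accumulates, building on the correlation idea above. All thresholds here are illustrative:

```python
from enum import Enum

class Posture(Enum):
    NORMAL = 1
    DEFENSIVE = 2   # e.g. stricter validation, CAPTCHAs, lower rate limits
    LOCKDOWN = 3    # e.g. allow-listed sources only

def choose_posture(anomalies_from_source: int, anomalies_overall: int) -> Posture:
    """Escalate the security posture as evidence of an attack accumulates.
    The counts would be fed by the application's own monitoring or a SIEM."""
    if anomalies_overall > 50:
        return Posture.LOCKDOWN
    if anomalies_from_source >= 5 or anomalies_overall > 20:
        return Posture.DEFENSIVE
    return Posture.NORMAL
```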