IP addresses

Posts relating to the category tag "IP addresses" are listed below.

27 August 2014

ISP-Enforced Web Filtering Controls

UK regulator OFCOM has published a report on network level filtering measures for internet service providers, also referred to as "family-friendly network-level filtering".

Photograph of children playing with Chris Milk's large-scale interactive triptych The Treachery of Sanctuary at the Barbican 'Digital Revolution' exhibition in London; further details at http://milk.co/treachery

Ofcom's Report on Internet Safety Measures - Internet Service Providers: Network Level Filtering Measures describes the current state of filtering put in place by the UK's four largest fixed line internet service providers (ISPs) - BT, Sky, TalkTalk and Virgin Media.

The report discusses:

  • Scope of controls (devices, categories and filter settings)
  • Implementation (timescales, customer prompts and take-up)
  • Account holder verification processes
  • Technical approaches and circumvention of the filters.

Filtering categories are not consistent across the four ISPs, but include Alcohol; Crime, violence and hate; Dating; Drugs; File sharing; Gambling; Games; Hacking; Nudity; Pornography; Sexual education; Social networking; Suicide and self-harm; and Tobacco. Blocking is primarily based on domain and hostname, but can also use the URL path (apart from Sky).
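The distinction between hostname-level and URL-path-level blocking can be illustrated with a small sketch (the hostnames, paths and categories here are invented for illustration, not taken from any ISP's actual lists):

```python
from urllib.parse import urlsplit

# Illustrative blocklists: hostname-level entries block a whole site,
# path-level entries block only specific URL paths (as most of the four
# ISPs can, Sky being the hostname-only exception noted above).
HOST_BLOCKLIST = {"gambling.example": "Gambling"}
PATH_BLOCKLIST = {("news.example", "/forum/hacking"): "Hacking"}

def filter_decision(url):
    """Return the blocking category for a URL, or None if allowed."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if host in HOST_BLOCKLIST:
        return HOST_BLOCKLIST[host]
    return PATH_BLOCKLIST.get((host, parts.path))

print(filter_decision("http://gambling.example/promo"))       # Gambling
print(filter_decision("http://news.example/forum/hacking"))   # Hacking
print(filter_decision("http://news.example/sport"))           # None
```

Note that a real filter operating only on domain and hostname cannot block the forum in this example without blocking the whole news site.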

Take-up (see 3.24 in the report) ranges from 4% (Virgin Media) and 5% (BT), through 8% (Sky), to 36% (TalkTalk). TalkTalk requires users to opt out rather than opt in.

Posted on: 27 August 2014 at 09:18 hrs


09 April 2014

Third-Party Tracking Cookie Revelations

A new draft paper describes how the capture of tracking cookies can be used for mass surveillance and, where other personal information is leaked by web sites, to build up a wider picture of a person's real-world identity.

Title page from 'Cookies that give you away: Evaluating the surveillance implications of web tracking'

Dillon Reisman, Steven Englehardt, Christian Eubank, Peter Zimmerman, and Arvind Narayanan at Princeton University's Department of Computer Science investigated how someone with passive access to a network could glean information from observing HTTP cookies in transit. The authors explain how pseudo-anonymous third-party cookies can be tied together without having to rely on IP addresses.

Then, where personal data leaks over non-SSL connections, it can be combined with the linked cookies into a larger picture of the person. The paper assessed what personal information is leaked by Alexa Top 50 sites with login support.
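To illustrate the linking idea (not the paper's actual algorithm, and using invented observations), a passive observer who sees both third-party cookie IDs and occasional cleartext identity leaks could join them up along these lines:

```python
from collections import defaultdict

# Hypothetical passive observations: (site, third-party cookie ID,
# identifier leaked in cleartext, or None). A network observer can tie
# together cookies that co-occur with the same leaked identifier.
observations = [
    ("shop.example", "adnet-cookie-123", "alice@example.org"),
    ("news.example", "adnet-cookie-123", None),
    ("mail.example", "other-cookie-999", "alice@example.org"),
]

def link_cookies(obs):
    """Group pseudonymous cookie IDs by the real-world identifier
    they were observed alongside."""
    by_identity = defaultdict(set)
    for _site, cookie, leaked in obs:
        if leaked:
            by_identity[leaked].add(cookie)
    return dict(by_identity)

print(link_cookies(observations))
# {'alice@example.org': {'adnet-cookie-123', 'other-cookie-999'}}
```

Once two cookies are tied to the same identity, every site where either cookie was seen joins the picture, without any reliance on IP addresses.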

This work is likely to attract the attention of privacy advocates and regulators, leading to increased interest in cookies and other tracking mechanisms.

The research work was motivated by two leaked NSA documents.

Posted on: 09 April 2014 at 10:02 hrs


10 May 2013

IP Address Sharing and Individual Identification

BT has announced a trial of its Carrier-Grade Network Address Translation (CGNAT) where Internet Protocol (IP) addresses will be shared between subscribers.

organisations [will] generally have to treat IP addresses as personal data

Concerns have been expressed about the ability of some applications to work if they rely on the assumption that IP addresses are unique, and also about how this affects the identification of individual people.

Out-law.com provides a good review of the issues and information from BT, but links to the sources are not provided. BT has apparently stated they will still be able to identify individuals despite using CGNAT.

But the issue of identification does not only relate to newsworthy "illegal online activity"; it also matters for the wider privacy protection of completely legal activity, where it is clear that IP addresses really must be considered personal identifiers, especially when they can be combined with other data sets. Something to consider in privacy impact assessments.

Posted on: 10 May 2013 at 09:48 hrs


17 December 2010

User Tracking Opt Out

Online behavioral advertising continues to be in the news (see also previous here, here and here). The US Federal Trade Commission (FTC) has now issued a preliminary staff report which has been widely discussed (including here, here, here and here.)

The FTC's work was prompted by the lack of success of "notice-and-choice" and "harm-based" models in providing adequate and meaningful consumer protection, despite various industry initiatives such as self-regulation. The FTC report proposes a new framework and suggests organisations should adopt a privacy-by-design approach in their information systems and business processes, provide greater clarity in explaining their use of personal data and, for practices that are not "commonly accepted", provide information so that people can make informed and meaningful choices.

One example of this, and the aspect which seems to have attracted the most attention, is the ability for consumers to make a universal choice about whether they allow their data to be used in behavioural advertising, i.e. the ability not to be tracked. The FTC also suggests organisations need to make their actions more transparent, and to help in the effort to educate consumers.

Note, the FTC document refers to "consumers" due to their area of responsibility, but the concepts could/should also be relevant to other personal data (e.g. employees, citizens).

But what does "no tracking" mean? There are already some initiatives in this area (e.g. the Network Advertising Initiative (NAI) Opt-out Tool and the Interactive Advertising Bureau (IAB) Self-Regulatory Program for Online Behavioral Advertising). The FTC has testified that the mechanism should be browser-based, allowing consumers to opt out easily and permanently. There has been debate about how "Do Not Track" might be achieved online, and one suggestion is that organisations honour a new HTTP header. The "X-Do-Not-Track" header would be sent if the user (consumer?) had set this as their preference (or not deselected it?). See http://donottrack.us/ for more discussion.
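As a sketch of what honouring such a header might look like server-side (the header names are those discussed above; the exact semantics were still being debated, so this is illustrative only):

```python
# A minimal sketch of a server-side check for a do-not-track request
# header. "X-Do-Not-Track" is the proposal mentioned above; "DNT: 1" is
# another form discussed at the time. Real semantics were undecided.
def tracking_allowed(headers):
    """Return False if the client has asked not to be tracked."""
    h = {k.lower(): v.strip() for k, v in headers.items()}
    if h.get("x-do-not-track") is not None:
        return False
    return h.get("dnt") != "1"

print(tracking_allowed({"DNT": "1"}))               # False
print(tracking_allowed({"X-Do-Not-Track": "1"}))    # False
print(tracking_allowed({"User-Agent": "Example"}))  # True
```

Every component that sets cookies or records behavioural data would need to consult a check like this before doing so.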

Tracking might involve cookies and recording data such as the type of device, its configuration, the user's location and IP address, and using this to serve targeted "behavioural advertising".

Extract from a web server log file showing the user's IP address recorded together with other data such as the requested URL

Cleaning your own web server log files of tracking data is not going to be enough.

Extract from an amended web server log file where the user's IP address, user agent and referrer are not recorded
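A scrubbing step like the one in that amended log extract might be sketched as follows (assuming the common Apache/NGINX "combined" log format; the sample line is invented):

```python
import re

# Scrub a "combined" format access log line so that the client IP,
# referrer and user agent are no longer retained, while keeping the
# request, status and size for operational use.
COMBINED = re.compile(
    r'^(?P<ip>\S+) (?P<ident>\S+) (?P<user>\S+) (?P<time>\[[^\]]+\]) '
    r'(?P<request>"[^"]*") (?P<status>\d{3}) (?P<size>\S+) '
    r'(?P<referrer>"[^"]*") (?P<agent>"[^"]*")$')

def scrub(line):
    m = COMBINED.match(line)
    if not m:
        return None  # leave unrecognised lines for manual review
    d = m.groupdict()
    return (f'- {d["ident"]} {d["user"]} {d["time"]} {d["request"]} '
            f'{d["status"]} {d["size"]} "-" "-"')

line = ('203.0.113.9 - - [17/Dec/2010:12:30:00 +0000] "GET / HTTP/1.1" '
        '200 1234 "http://ref.example/" "ExampleBrowser/1.0"')
print(scrub(line))
# - - - [17/Dec/2010:12:30:00 +0000] "GET / HTTP/1.1" 200 1234 "-" "-"
```

In practice it is usually better to configure the web server's log format so the fields are never written, rather than removing them afterwards.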

Any system that receives HTTP requests, including advertisers' systems, would have to honour the setting. But this does not just apply to advertisements. Look for anything hosted on another system, such as:

  • inline content (news, images, videos)
  • JavaScript libraries
  • trust seals (e.g. SSL certificate verification, privacy seals, trust schemes, tested for security, etc)
  • web analytics
  • widgets (e.g. buttons).
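One way to start the inventory the list above implies is to extract the third-party hosts a page references (the page markup and hostnames below are invented for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

# Collect the hosts referenced by src/href attributes that differ from
# the page's own host, i.e. content pulled in from other systems.
class ExternalHosts(HTMLParser):
    def __init__(self, own_host):
        super().__init__()
        self.own_host = own_host
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlsplit(value).hostname
                if host and host != self.own_host:
                    self.hosts.add(host)

page = ('<script src="http://analytics.example/ga.js"></script>'
        '<img src="/local/logo.png">'
        '<iframe src="http://ads.example/banner"></iframe>')
parser = ExternalHosts("www.example.org")
parser.feed(page)
print(sorted(parser.hosts))  # ['ads.example', 'analytics.example']
```

Relative references such as the logo are served from the site's own host and so do not appear in the result.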

And these might be served from your own systems, not just third parties'. Personal data may also be stored locally by the application on the user's own system. The question also arises of what exactly constitutes "tracking", and whether audit and security event logs would be considered tracking. Even traffic management and anti denial-of-service (DoS) systems track users to a degree, as of course does session management. Practices necessary to perform the designated service are likely to be acceptable, provided data are not kept indefinitely; it is usage for purposes which the user might not have expected that is the real concern. Do users "expect" security event logging? The examples at http://donottrack.us/ discuss prevention of logging in web server log files, not other usages such as session management or incident monitoring. So guidance will be required.

The issue of tracking and personal data leakage may come as a surprise to some web site owners (as it was to the NHS), but it is rather old news really. Like knowing all the components that make up your web site, and all the allowable entry points, knowing where you are sending data really is a baseline information requirement. The NHS example relates primarily to leakage of sensitive data to other parties, and the Obama example to security risks (see the extended explanation). The results of the FTC's final report next year will affect internet users worldwide, as similar policies are likely to be adopted in other countries.

The FTC is asking for comments on its proposals by 31st January 2011. Their own suggested questions are listed in Appendix A of the report.

Posted on: 17 December 2010 at 12:30 hrs


14 December 2010

Trust .UK

Service and ownership location can be fundamental selection factors for online users. The importance of confidence in the UK's digital economy was highlighted in the Digital Britain Report (2009) and more recently in the Cabinet Office's Cyber Security Strategy of the United Kingdom/Fact Sheet on Cyber Security and EURIM's Can Society Afford to Rely on Security by Afterthought Not Design?. Building trust in the .UK brand is a necessary part of a healthy competitive economy.

Hand-in-hand with creating "the best place to do online business", the UK needs to increase visibility of the geographical properties of its online organisations. I wonder if in time, we will have more "country of origin" information, like the security labelling mentioned on Friday, to help users (employees, customers, clients, and citizens) make informed choices about who they will share information with and buy products & services from.

A ruling last week by the EU Court of Justice suggests that where companies direct their activities at foreign consumers, this affects where they can take legal action, or have legal action taken against them. "Directing" may include using a domain name of another country, using a .com domain name, quoting international contact details, mentioning country names (e.g. in delivery rates) or offering country/language options.

We have already seen moves to ensure .uk domains are not used for criminal activities, but is the domain name enough? No. Without getting jingoistic, as the ruling indicates, there are all sorts of additional geographical properties that affect users' rights and the ability for governments to enforce legislation. An equivalent to the "Security Facts" label might be "Location Facts":

Two example 'Location facts' labels side by side - each has the type (web application); date (14 December 2010); URL; application country of server hosting, domain name registrar, domain name servers, data storage/transfers, payment processor; organisation legal name; country of registered office address, holding company, trading address, corporation tax and primary bank account; consumer product delivery countries, terms and conditions jurisdiction and safety standards - one is predominantly 'GB' although it is hosted in Germany (DE) and the other shows an international company based in a tax haven

where "GB" is the ISO 3166-1-alpha-2 code for the United Kingdom. In the example on the left, the application is hosted in Germany and has some data transfers to the United States because of web analytics, SSL verification and inline advertisement code hosted there. Of course, the situation can be complex, and in a single label it can be difficult to describe all the important geographical properties, but let's at least try. Knowing the locations of each element of the supply chain is less relevant to the end user than the details above.

These two examples are just made up to emphasize the possibilities and are not meant to be xenophobic in any way. Consumers, and other web product users, should be able to find out who they may be interacting with and the scope for redress in the event of a problem, so they can make their own choice. This is no different to having the place of origin on food product labelling.

This quotation from the foreword by James Paice, Minister of State for Agriculture and Food, to the British Retail Consortium's new guide, Principles on Country of Origin Information, couldn't explain it better:

Championing the practices of the best performers and bringing others into line will reduce confusion and ensure improvements in both the quality and consistency of origin information for all consumers.

It would seem to be as appropriate for web products too. Of course, there are many other issues that affect user trust—jurisdiction being just one.

Can we trust self-labelling? Well, for consumers, the Advertising Standards Authority's remit will include digital assets like web sites from 1st March 2011. Honesty plays a big part too, but consumer groups (e.g. the Confidence Code for energy comparison web sites) have some punch, and trade associations could develop standards for their members. Ultimately, the power of markets and groups of individuals would hopefully keep other organisations in check.

Posted on: 14 December 2010 at 08:18 hrs


02 July 2010

Web Site Security Basics for SMEs

Sometimes when I'm out socially and people ask what I do, the conversation progresses to concerns about their own web site. They may have a hobby site, run a micro-business or be a manager or director of a small and medium-sized enterprise (SME)—there's all sorts of great entrepreneurial activity going on.

It is very common for SMEs not to have much time or budget for information security, and the available information can be poor or inappropriate (ISSA-UK, under the guidance of their Director of Research David Lacey, is trying to improve this). But what can SMEs do about their web presence? It is very unusual not to have a web site, whatever the size of the business.

Photograph of a waste skip at the side of St John Street in Clerkenwell, London, UK, with the company's website address written boldly across it

Last week I was asked "Is using <company> okay for taking online payments?" and then "what else should I be doing?". Remember we are discussing protection of the SME's own web site, not protecting its employees from using other sites. If I had no information about the business or any existing web security issues, this is what I recommend checking and doing before anything else:

  • Obtain regular backup copies of all data that changes (e.g. databases, logs, uploaded files) and store these securely somewhere other than the host servers. This may typically be daily, but the frequency should be selected based on how often data changes and how much data the SME might be prepared to lose in the event of total server failure.
    • check backup data can be read and restored periodically
    • don't forget to securely delete data from old backups when they are no longer required
  • Use a network firewall in front of the web site to limit public (unauthenticated user) access to those ports necessary to access the web site. If other services are required remotely, use the firewall to limit from where (e.g. IP addresses) these can be used.
    • keep a record of the firewall configuration up-to-date
    • limit who can make changes to the firewall
  • Ensure the host servers are fully patched (e.g. operating system, services, applications and supporting code), check all providers for software updates regularly and allow time for installing these.
    • remove or disable all unnecessary services and other software
    • delete old, unused and backup files from the host servers
  • Identify all accounts (log-in credentials) that provide server access (not just normal web page access), such as those used for transferring files, accessing administrative interfaces (e.g. CMS admin, database and server management/configuration control panels) and using remote desktop. Change the passwords, keep a record of who has access, remove accounts that are no longer required, and enable logging for all access using these accounts.
    • restrict what each account can do as much as possible
    • add restrictions to the use of these accounts (e.g. limit access by IP address, require written approval for use, keep account disabled by default)
  • Check that every agreement with the third parties required to operate the web site is in the organisation's own name. These may include the registration of domain names, SSL certificates, hosting contracts, monitoring services, data feeds, affiliate marketing agreements and service providers such as for address look-up, credit checks and making online payments.
    • ensure the third parties have the organisation's official contact details, and not those of an employee or of the site's developers
    • make note of any renewal dates
  • Obtain a copy of everything required for the web site including scripts, static files, configuration settings, source code, account details and encryption keys. Keep this updated with changes as they are made.
    • verify who legally owns the source code, designs, database, photographs, etc.
    • check what other licences affect the web site (e.g. use of open source and proprietary software libraries, database use limitations).
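As one concrete illustration of the backup checks in the list above (file names invented; a real restore test should restore into a working system, not just compare checksums):

```python
import hashlib
import os
import tempfile

# Record a checksum when a backup is taken, so a periodic restore test
# can confirm the copy is still readable and identical to the original.
def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(original, backup):
    """Return True if the backup can be read and matches the original."""
    try:
        return sha256_of(original) == sha256_of(backup)
    except OSError:
        return False  # an unreadable backup is a failed restore test

# Demonstration with two temporary files standing in for a database
# dump and its off-site copy.
with tempfile.TemporaryDirectory() as d:
    dump = os.path.join(d, "site.sql")
    copy = os.path.join(d, "site.sql.bak")
    for p in (dump, copy):
        with open(p, "wb") as f:
            f.write(b"CREATE TABLE customers ...")
    print(verify_backup(dump, copy))  # True
```

A check like this is cheap to automate, and catches silently corrupted or truncated backups long before they are needed in an emergency.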

Do what you can, when you can. Once those are done, then:

  • Verify that the web site and all its components (e.g. web widgets and other third-party code/content) do not include common web application vulnerabilities that can be exploited by attackers (e.g. SQL injection, cross-site scripting).
  • Check what obligations the organisation is under to protect business and other people's data such as the Data Protection Act, guidance from regulators, trade organisation rules, agreements with customers and other contracts (e.g. PCI DSS via the acquiring bank).
    • impose security standards and obligations on suppliers and partner organisations
    • keep an eye open for changes to business processes that affect data
  • Document (even just some short notes) the steps to rebuild the web site somewhere else, and to transfer all the data and business processes to the new site.
    • include configuration details and information about third-party services required
    • think about what else will need to be done if the web site is unavailable (does it matter, if so what exactly is important?)
  • Provide information to the web site's users on how to protect themselves and their data.
    • point them to relevant help such as from GetSafeOnline, CardWatch and Think U Know
    • provide easy methods for them to contact the organisation if they think there is a security or privacy problem
  • Monitor web site usage behaviour (e.g. click-through rate, session duration, shopping cart abandonment rate, conversion rate), performance (e.g. uptime, response times) and reputation (e.g. malware, phishing, suspicious applications, malicious links) to gather trend data and identify unusual activity.
    • web server logs are a start, but customised logging is better
    • use reputable online tools (some of which are free) to help.
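The "identify unusual activity" point might begin with something as simple as comparing each client's request count to that of a typical client (the threshold and sample addresses are invented):

```python
from collections import Counter

# Flag any client making far more requests than the typical (median)
# client in a log sample. The factor of 10 is an arbitrary starting
# point to tune against real traffic.
def unusual_clients(ips, factor=10):
    counts = Counter(ips)
    if not counts:
        return []
    typical = sorted(counts.values())[len(counts) // 2]  # median count
    return [ip for ip, n in counts.items() if n > factor * typical]

sample = (["198.51.100.1"] * 3 + ["198.51.100.2"] * 2
          + ["203.0.113.5"] * 500)
print(unusual_clients(sample))  # ['203.0.113.5']
```

Trend data gathered over weeks makes such thresholds far more meaningful than a single day's sample.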

That's just the basics. So, what would be next for an SME? If the web site is a significant sales/engagement channel, if the organisation has multiple web sites, is in a more regulated sector or one targeted particularly by criminals (e.g. gaming, betting and financial), takes payments or does other electronic commerce, allows users to add their own content, or processes data for someone else, then the above is just the start. Those SMEs probably need to be more proactive.

This helps to protect the SME's business information, but also helps to protect the web site users and their information. After all, the users are existing and potential customers, clients and citizens.

Oh, the best response I had when I was explaining my work to someone: "You're an anti-hacker then?". Well, I suppose so, but it's not quite how I'd describe it.

Any comments or suggestions?

Posted on: 02 July 2010 at 08:18 hrs


01 January 2010

NAI Code Compliance Report 2009

Following on from Tuesday's topic of terms and conditions for interactive advertising, the US Network Advertising Initiative (NAI) has just released their 2009 compliance report.

Members that collect, transfer, or store data for use in OBA [Online Behavioral Advertising], Multi-Site Advertising and/or Ad Delivery & Reporting shall provide reasonable security for that data.

The NAI is an association of 35 US advertising networks, data exchanges, and marketing analytics services providers including Advertising.com, Google and Yahoo.

The NAI Compliance Report 2009 discusses compliance by its members with its self-regulatory code of conduct governing the collection, use, and disclosure of data for online advertising services by its member companies (the NAI Code). The NAI Code has its own definition of personally identifiable information (PII) and sensitive information and its own protection principles. The NAI found its members to be broadly in compliance with the code, apart from ten members that did not disclose specific retention periods for data collected.

Whatever your views of behavioural advertising, industry initiatives like this to improve, and report on, standards are a welcome contribution. No doubt the code will evolve over time, but it is a good starting point. The code perhaps lacks requirements for measuring the accuracy of data or requiring ways for consumers to correct information about themselves, and it would be useful to know what checks are being undertaken as part of the audit. For example "Reasonable security is determined in light of several factors including, but not limited to, the sensitivity of the data, the nature of a company's business operations, the types of risks a company faces, and the reasonable protections available to a company" could be interpreted in a number of ways and some guidance on what is "reasonable" both from the organisation and individual's points of view would be welcome.

P.S. Happy new year.

Posted on: 01 January 2010 at 14:57 hrs


25 October 2009

From Whiteboard to Web Application

Sometimes finding all the web applications in an organisation can be the difficult part in trying to assess what risks exist.

Transport for London don't just have web sites (and, I suspect, an intranet). They have been gradually moving away from whiteboards for live underground travel news at tube stations:

Photograph of a transport information board at Great Portland Street station where the information is provided on magnetic tiles and by hand written wipe-dry pens

And now have electronic versions:

Photograph of a transport information board at Farringdon station where the information is provided on an LCD or plasma display

I don't know what technology is being used here, but other information boards have been seen to display web browser error messages leaking network information:

Photograph of a transport information display showing an 'address not found' error message from Firefox

But, what about elsewhere? I saw this on the live electronic advertisement boards at Bond Street station this weekend:

Photograph of an advertisement display board at Bond Street station elevators showing the words 'System Name' followed by a code and what looks like an IP address, written vertically up the portrait-orientated unit

Sorry it's a bit blurred, but I was going up the escalator at the time. Several, but not all the displays had their system names shown rather than an advertisement. It certainly looks like an IP address, but is there a web application inside? I've previously highlighted other information systems and displays that seem to be IP-enabled.

An investigation of your network, examining what is listening on which ports, and correlating this with the actual network traffic, might reveal more web applications than you thought.
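Such an investigation might start with a simple connect check for common web ports (only against hosts you are authorised to scan; the ports listed are a guess at likely web services):

```python
import socket

# Check a host for listening services on likely web ports. Only scan
# systems you are authorised to test; the host below is illustrative.
def open_web_ports(host, ports=(80, 443, 8080, 8443), timeout=0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
    return found

# print(open_web_ports("192.0.2.10"))  # e.g. might report [80, 8080]
```

Correlating results like these with actual network traffic, as suggested above, helps distinguish forgotten embedded web interfaces from services that are meant to be there.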

Posted on: 25 October 2009 at 18:46 hrs


29 September 2009

IP Address Restrictions and Exceptions

It's common for access to some web sites to be restricted to users from particular Internet Protocol (IP) addresses. This is usually in addition to some other identification and authentication method. But other IP addresses are often added to this "allow list" and these should not necessarily be trusted in the same way.

Photograph of a sign with an exclamation mark on a yellow triangle that reads 'Caution - Traffic management Trial - DO NOT MOVE' on a construction site boundary's wire barrier

In a typical scenario, a web site hosted on the internet that is used to administer another web application might be restricted to the company's own IP addresses. Then the developers say they need to check something on the live site, or another server needs to index the content, or someone wants to work from home for a while, or the site needs to be demonstrated at a client's location. All these additional IP addresses are added to the "allow list". The restrictions may be applied at a network firewall, a traffic management system, the web server, the application itself, intrusion detection systems or log analysis software, or several of these. They are difficult to manage, and in time there will be many IP addresses that no-one knows the reason for, unless each entry is carefully documented and subject to a fixed time limit, at which point it is confirmed again by an appropriate person or removed. These extra addresses are quite often hard for someone else to guess.
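The documentation-and-expiry discipline described above might be sketched like this (the entries, owners and dates are invented):

```python
from datetime import date
from ipaddress import ip_address, ip_network

# Every allow-list entry records who asked for it, why, and when it
# must be re-confirmed or removed. All entries here are illustrative.
ALLOW_LIST = [
    {"net": "192.0.2.0/29", "owner": "dev team", "reason": "live checks",
     "expires": date(2009, 12, 31)},
    {"net": "198.51.100.7/32", "owner": "ops", "reason": "home working",
     "expires": date(2009, 10, 1)},
]

def allowed(addr, today):
    """Return the matching unexpired entry for an address, or None."""
    ip = ip_address(addr)
    for entry in ALLOW_LIST:
        if ip in ip_network(entry["net"]) and today <= entry["expires"]:
            return entry
    return None

print(allowed("192.0.2.3", date(2009, 9, 29)) is not None)  # True
print(allowed("198.51.100.7", date(2009, 11, 1)))           # None (expired)
```

The point is less the code than the record: an entry with an owner, a reason and an expiry date can be reviewed; a bare IP address cannot.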

However, there is another area where IP addresses are added to "allow lists", and this is for remote monitoring and testing services. These might be checking uptime, response times, content changes, HTML validation or security testing. The service providers publish the IP addresses of the source systems so that companies can specifically allow access to their web sites. Since the number of these services is relatively small, it's not too difficult to find which one might give access to areas of a web site or web application that the public (and malicious people) should not be able to get to. The particular danger here is that the IP addresses might be excluded from monitoring and logging, and therefore even a diligent web site manager might not realise for example the uptime monitoring service is making unusual, or excessive, requests.

Although it is unlikely a malicious person is using a "trusted" address (unless routing has been compromised as well), problems can go undetected when they come from what appears to be a legitimate source. The IP address may have been typed incorrectly or, worse, the restrictions/exceptions may not have been implemented correctly, allowing more addresses than intended to have the privileged access. Not logging a user's session is itself privileged access.

Allow traffic through, but be very specific about what is allowed, and monitor what's going on. Review all the exceptions periodically. Be especially careful about anything that bypasses authentication (such as allowing a search engine to crawl restricted-access content) on an otherwise public site.

Posted on: 29 September 2009 at 10:18 hrs


25 August 2009

User Analytics and Tracking

A recent proposed revision of the policy on web tracking technologies for US federal web sites by the Office of Management and Budget set out four principles regarding user analytics and tracking.

  • Adhere to all existing laws and policies (including those designed to protect privacy) governing the collection, use, retention, and safeguarding of any data gathered from users.
  • Post clear and conspicuous notice on the website of the use of web tracking technologies.
  • Provide a clear and understandable means for a user to opt-out of being tracked.
  • Not discriminate against those users who decide to opt-out, in terms of their access to information.

The document recommends avoiding outsourced tracking and outsourced data analysis—issues not thought about by many organisations. Just because a third-party service is cheap, doesn't necessarily mean it's the appropriate method to use. I'm less convinced about the example of using cookies to record opt-outs.

The proposed revision attracted a well-considered joint response from the Center for Democracy & Technology and the Electronic Frontier Foundation. They suggested three additional principles.

  • Limit use of tracking data.
  • Limit retention of tracking data.
  • Obtain third-party verification.

The response also referenced their May 2009 Open Recommendations for the Use of Web Measurement Tools on Federal Government Web Sites which recommended the following:

  • Use data only for measurement.
  • Prominently disclose.
  • Offer choice.
  • Limit data retention.
  • Limit cross-session measurement.
  • Obtain third-party verification.

Whilst none of the final guidelines will be mandatory outside the US federal sector, the issues raised are worth consideration by all commercial and non-commercial web sites. For example, the recommendations and principles above could be used to help guide a privacy impact assessment of an organisation's own use of web analytics and tracking technologies.

Posted on: 25 August 2009 at 08:37 hrs


© 2008-2015 clerkendweller.uk