Quite often when you’re looking for data as part of a story, that data will not be on a single page, but on a series of pages. To manually copy the data from each one – or even scrape the data individually – would take time. Here I explain a way to use Google Docs to grab the data for you.
Some basic principles
Although Google Docs is a pretty clumsy tool to use to scrape webpages, the method used is much the same as if you were writing a scraper in a programming language like Python or Ruby. For that reason, I think this is a good quick way to introduce the basics of certain types of scrapers.
Here’s how it works:
Firstly, you need a list of links to the pages containing data.
Quite often that list might be on a webpage which links to them all, but if not you should look at whether the links have any common structure, for example "http://www.country.com/data/australia" or "http://www.country.com/data/country2". If they do, then you can generate a list by filling in the part of the URL that changes each time (in this case, the country name or number), assuming you have something to fill it from (e.g. a list of countries or codes, or a simple number sequence).
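For example, if you had those country names in column A of your spreadsheet, a formula along these lines (a rough sketch, assuming the URLs really do follow that pattern) would build each address for you, and could be copied down the column:
="http://www.country.com/data/"&A2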
Second, you need the destination pages to have some consistent structure to them. In other words, they should look the same (although looking the same doesn’t mean they have the same structure – more on this below).
The scraper then cycles through each link in your list, grabs particular bits of data from each linked page (because it is always in the same place), and saves them all in one place.
Scraping with Google Docs using =importXML – a case study
If you’ve not used =importXML before it’s worth catching up on my two previous posts: How to scrape webpages and ask questions with Google Docs and =importXML, and Asking questions of a webpage – and finding out when those answers change.
This takes things a little bit further.
In this case I’m going to scrape some data for a story about local history – the data for which is helpfully published by the Durham Mining Museum. Their homepage has a list of local mining disasters, with the date and cause of the disaster, the name and county of the colliery, the number of deaths, and links to the names and to a page about each colliery.
However, there is not enough geographical information here to map the data. That, instead, is provided on each colliery’s individual page.
So we need to go through this list of webpages, grab the location information, and pull it all together into a single list.
Finding the structure in the HTML
To do this we need to isolate which part of the homepage contains the list. If you right-click on the page and ‘view source’, then search for ‘Haig’ (the first colliery listed), you can see it sits in a table whose opening tag looks like this: <table border=0 align=center style="font-size:10pt">
We can use =importXML to grab the contents of the table like so:
=ImportXML("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]")
But we only want the links, so how do we grab just those instead of the whole table contents?
The answer is to add more detail to our request. If we look at the HTML that contains the link, it looks like this:
<td valign=top><a href="http://www.dmm.org.uk/colliery/h029.htm">Haig Pit</a></td>
So it’s within a <td> tag – but all the data in this table is, not surprisingly, contained within <td> tags. The key is to identify which <td> tag we want – and in this case, it’s always the fourth one in each row.
So we can add “//td[4]” (‘look for the fourth <td> tag’) to our function like so:
=ImportXML("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]//td[4]")
Now we should have a list of the collieries – but what we actually want is the URL of the page each one links to. That is contained within the value of the href attribute – or, put in plain language: it comes after the bit that says href=".
So we just need to add one more bit to our function: “//@href”:
=ImportXML("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]//td[4]//@href")
So, reading from right to left, this is what it says: “Grab the value of href, within the fourth <td> tag of every row, of the table whose style value starts with font-size:10pt”.
Note: if there was only one link in every row, we wouldn’t need to include //td[4] to specify the link we needed.
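To illustrate that: if each row contained only one link, a sketch of the formula might simply be:
=ImportXML("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]//@href")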
Scraping data from each link in a list
Now we have a list – but we still need to scrape some information from each link in that list.
Firstly, we need to identify where the information we need sits on the linked pages. Taking the first page, view the source and search for ‘Sheet 89’, the first two words of the ‘Map Ref’ line.
The HTML code around that information looks like this:
<td valign=top>(Sheet 89) NX965176, 54° 32' 35" N, 3° 36' 0" W</td>
Looking a little further up, the table that contains this cell uses HTML like this:
<table border=0 width="95%">
So if we needed to scrape this information, we would write a function like this:
=importXML("http://www.dmm.org.uk/colliery/h029.htm", "//table[starts-with(@width, '95%')]//tr[2]//td[2]")
…And we’d have to write it for every URL.
But because we have a list of URLs, we can do this much quicker by using cell references instead of the full URL.
So let’s assume that your formula is in cell C2 (as it is in this example), and the results have formed a column of links going from C2 down to C11. Now we can write a formula that looks at each URL in turn and performs a scrape on it.
In D2 then, we type the following:
=importXML(C2, "//table[starts-with(@width, '95%')]//tr[2]//td[2]")
If you copy that cell all the way down the column, the cell reference will change in each row, so the function is performed on each URL in turn.
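The copy in D3, for example, should read:
=importXML(C3, "//table[starts-with(@width, '95%')]//tr[2]//td[2]")
…and so on down to D11.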
In fact, we could simplify things even further by putting the second part of the function in cell D1 – without the quotation marks – like so:
//table[starts-with(@width, '95%')]//tr[2]//td[2]
And then in D2 change the formula to this:
=ImportXML(C2,$D$1)
(The dollar signs keep the D1 reference the same even when the formula is copied down, while C2 will change in each cell)
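Copied down to row 3, for example, the formula becomes:
=ImportXML(C3,$D$1)
…with only the C reference changing as you go down the column.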
Now it works – we have the data from each of 8 different pages. Almost.
Troubleshooting with =IF
The problem is that the structure of those pages is not as consistent as we thought: for some pages the scraper produces extra cells of data, which pushes the results we actually want out of the cells where they should appear.
So I’ve used an IF formula to clean that up as follows:
In cell E2 I type the following:
=if(D2="", ImportXML(C2,$D$1), D2)
This says: ‘If D2 is empty, then run the importXML formula again and put the results here; but if it’s not empty, then just copy the value across’.
That formula is copied down the column.
But there’s still at least one empty cell even now, so the same formula is used again in column F:
=if(E2="", ImportXML(C2,$D$1), E2)
A hack, but an instructive one
As I said earlier, this isn’t the best way to write a scraper, but it is a useful way to start to understand how they work, and a quick method if you don’t have huge numbers of pages to scrape. With hundreds of pages, it’s more likely you will miss problems – so watch out for inconsistent structure and data that doesn’t line up.
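One quick sanity check, for example, is to count how many results have come back empty, with something like this (assuming your final results sit in F2 down to F11, as in this example):
=COUNTBLANK(F2:F11)
If that returns anything other than zero, some pages still need a closer look.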
Source: http://onlinejournalismblog.com/2011/10/14/scraping-data-from-a-list-of-webpages-using-google-docs/