Thursday, 28 April 2016

Customized Web Data Scraping Services

To understand your customers’ behaviour, it is crucial to organize scattered data into a single repository. There are experts today who can scrape websites to extract data and develop analytics from it. Data extraction is a major requirement for any company, small or large, that deals with a massive volume of information stored in complex structures. Premium data mining services help extract and structure data from the structured and semi-structured documents found on the internet or in other data warehouses.

Companies dealing with large amounts of data on a regular basis may need to convert those data sets into useful information, and that is where web scraping services come in. The experts offering such services will ensure that none of the data is missed. Customized data extraction is carried out mostly on customer databases in order to analyse behaviour and demographic characteristics. Personalized services offer a whole range of benefits:

Ensure Data Quality

The experts use a custom data extractor to ensure that the extracted data are of high quality. More than forty percent of websites change their structure every month, which makes them difficult to monitor yourself. A customized data extraction service lets you concentrate on your business’s larger goals instead of spending your time on do-it-yourself web data extraction.

Availability of Custom Scraper Tool

A reputable web scraping service provider is expected to have custom scraper tools with which they can extract information efficiently without missing anything. Using these tools, they can scrape even the most complex data and deliver it in any format.

Avoid Possible Human Errors

When extracting so much data, even professional service providers can occasionally miss something. With customized, automated services, the possibility of human error is all but eliminated, and a lot of time and cost is saved too.

Great Speed

With efficient tools, a custom web scraping service provider can work very fast to convert large amounts of data into analytics, and can extract data from multiple sources. The extracted data can then be delivered in customized structured formats such as a Microsoft database, text, HTML, SQL script and so on.

Update Website

Additionally, you will be able to update your website with the latest prices and filter searches by skipping data that does not match the keyword.

Tailor-made services even allow the professionals to extract data efficiently from emails and other communication channels. With this data you will be able to spot what your business needs to implement to convert visitors into customers, and plan your marketing accordingly.

Custom website data scraping services give companies access to on-demand data scraped from the web according to their individual needs. The experts offering end-to-end data extraction services can also help prepare the analytics for your business.

Source: http://www.web-parsing.com/blog/customized-web-data-scrapping-services

Monday, 25 April 2016

Data Extraction: Tips to Get Exemplary Results

Data extraction is a skill: the more you master it, the better your chances of getting a lucid picture of a volatile market and a clearer perspective on constantly changing trends. Escalating market volatility and intensifying competition have been the biggest factors behind the rise of data extraction and data mining.

Data extraction is used by companies, large and small alike, to collect data about a specific industry, about targeted customers or about their competition in the market. In fact, it has become a primary tool for marketers planning their moves for branding and promoting particular products or services. It helps a wide range of industrial sectors find and learn from the specific data they require.

With the rise of the internet, web scraping has emerged as an important contributor to the success of your venture or organization. It processes the HTML of a web page to obtain data and convert it into another format (e.g. HTML to XML).
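As a small illustration of that idea, here is a rough Python sketch (assuming the requests and lxml libraries, and a made-up page URL) that pulls the cells of a page's first table out of the HTML and re-emits them in another structured format, CSV in this case:

import csv
import requests
from lxml import html

# Made-up URL: substitute whatever page you are actually studying.
page = requests.get("http://www.example.com/products")
tree = html.fromstring(page.content)

# Collect the text of every cell in the first table on the page...
rows = [[td.text_content().strip() for td in tr.xpath(".//td")]
        for tr in tree.xpath("//table[1]//tr")]

# ...and write it back out in a different format.
with open("products.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)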

Various tools form an integral part of data extraction and data scraping. The following offers a brief outline of some of them:

Email Extraction – An email extractor tool is used to acquire email ids from dependable sources automatically.

Screen Scraping – Screen scraping is the practice of reading text information from a screen and collecting visual data, rather than parsing data as in web scraping.

Data Mining – As the name suggests, data mining is the process of discovering patterns in information. It transforms the information into formats such as CSV, MS Excel or HTML, depending on your requirements.

Web Spider – A web spider is a computer program that browses the internet in a systematic, automated manner. It is used by many search engines to provide up-to-date data; a minimal sketch of such a spider appears just below.
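Here is that sketch in Python; the start URL, page limit and single-domain restriction are illustrative choices rather than anything prescribed above (it assumes the requests and lxml libraries):

from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from lxml import html

def crawl(start_url, max_pages=50):
    """Visit pages in a systematic, automated manner, staying on one domain."""
    domain = urlparse(start_url).netloc
    queue, visited = deque([start_url]), []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        try:
            tree = html.fromstring(requests.get(url, timeout=10).content)
        except requests.RequestException:
            continue
        visited.append(url)
        # Queue every same-domain link found on this page.
        for href in tree.xpath("//a/@href"):
            link = urljoin(url, href)
            if urlparse(link).netloc == domain and link not in visited:
                queue.append(link)
    return visited

print(crawl("http://www.example.com/"))   # made-up start URL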

It is often seen that, while extracting data, many get lost in a labyrinth of confusion, data overabundance and unfamiliar terms. Handling all of this may sound easy, but when it is not executed with appropriate procedures and processes, the results can be disastrous.

This in no way means that data mining is rocket science that only a few gifted and skilled people can take up. All it requires is undivided attention, careful preparation and training, so brace yourself for an overview of some practical tips that can help you extract data successfully and give your business a boost.

Identify Your Business Goals:

Get a clear picture in your mind of what your business goals are.

Data extraction branches into various techniques, and you need to choose among them wisely, depending on your business goals. For example, if your primary requirement is to get the email ids of potential clients for an email campaign, you need an email extractor. This tool extracts email ids from trustworthy sources automatically, collecting business contacts from web pages, text files, HTML files or other formats without duplicating them. So, if you are not sure what you want, even the best tools will be of no use!
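A rough sketch of such an extractor in Python (the file names are hypothetical, and the regular expression is a deliberately simple approximation of an email address):

import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(paths):
    """Collect email ids from text or HTML files without duplicating them."""
    found = set()
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            found.update(EMAIL_RE.findall(f.read()))
    return sorted(found)

print(extract_emails(["contacts.html", "leads.txt"]))   # hypothetical input files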

A crystal-clear mindset leads to a better understanding of the market scenario, and thus to powerful and effective strategies for getting the desired outcomes. People in the real estate business, for example, should have a vision of which area they want to target specifically; with a clear vision, they can spell out exactly what they want and where to look for it.

Set Realistic Expectations:

Once you have identified your business goals, make sure they are realistic and attainable. Unrealistic and unachievable targets are the real cause of future obstacles and frustrations.

Since various tools can be employed to extract data, vague or unclear goals make it difficult to determine which tool should be applied.

A crystal-clear mindset will give you insight into the direction your business is heading.

Moreover, you can determine which method will get you excellent results. A lucid picture of your competitors’ past and present helps you set targets based on others’ experience, rather than chasing expectations that have never been achieved before.

Appoint a Skilled Data Miner:

A skilled data miner will greatly reduce the painstaking and tiresome work of planning, devising and preparation.

For a fresh start-up you can go ahead with the standard procedure; however, if you have several professionals at your disposal, pick the right one: someone who is not only knowledgeable but also reliable and committed to the task.

Prevent Data Deposits:

Being sure of what you really want will help you avoid piling up unnecessary data.

Data mining, just like real mining, is the skill of knowing where the treasure lies and being able to get at it in the most efficient and effective way.

Being able to spot authenticated, reliable sources of well-researched information is the short cut to locating exactly the right data.

If you aimlessly open every website, the results are bound to be ambiguous and will ultimately be a waste of time and effort.


Source:  http://www.habiledata.com/blog/data-extraction-is-not-a-rocket-science-follow-these-4-tips-to-get-exemplary-results

Friday, 3 July 2015

Scraping data from a list of web pages using Google Docs

Quite often when you’re looking for data as part of a story, that data will not be on a single page, but on a series of pages. To manually copy the data from each one – or even scrape the data individually – would take time. Here I explain a way to use Google Docs to grab the data for you.

Some basic principles

Although Google Docs is a pretty clumsy tool to use to scrape webpages, the method used is much the same as if you were writing a scraper in a programming language like Python or Ruby. For that reason, I think this is a good quick way to introduce the basics of certain types of scrapers.

Here’s how it works:

Firstly, you need a list of links to the pages containing data.

Quite often that list might be on a webpage which links to them all, but if not you should look at whether the links have any common structure, for example “http://www.country.com/data/australia” or “http://www.country.com/data/country2”. If it does, then you can generate a list by filling in the part of the URL that changes each time (in this case, the country name or number), assuming you have a list to fill it from (e.g. a list of countries, codes or simple addition).
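In a scripting language that step is a one-liner; a sketch in Python, using the made-up country URLs from the example above:

countries = ["australia", "brazil", "canada"]   # whatever list you already have
urls = ["http://www.country.com/data/" + c for c in countries]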

Second, you need the destination pages to have some consistent structure to them. In other words, they should look the same (although looking the same doesn’t mean they have the same structure – more on this below).

The scraper then cycles through each link in your list, grabs particular bits of data from each linked page (because it is always in the same place), and saves them all in one place.
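Written out in Python, that cycle might look like the rough sketch below (assuming the requests and lxml libraries; the XPath is a placeholder for wherever the data actually sits on each page):

import requests
from lxml import html

results = []
for url in urls:   # the list of links built in the previous step
    tree = html.fromstring(requests.get(url).content)
    # Placeholder XPath: point it at the element that is "always in the same place".
    cells = tree.xpath("//table//tr[2]/td[2]/text()")
    results.append((url, cells[0].strip() if cells else None))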

Scraping with Google Docs using =importXML – a case study

If you’ve not used =importXML before it’s worth catching up on my previous two posts: ‘How to scrape webpages and ask questions with Google Docs and =importXML’ and ‘Asking questions of a webpage – and finding out when those answers change’.

This takes things a little bit further.

In this case I’m going to scrape some data for a story about local history – the data for which is helpfully published by the Durham Mining Museum. Their homepage has a list of local mining disasters, with the date and cause of the disaster, the name and county of the colliery, the number of deaths, and links to the names and to a page about each colliery.

However, there is not enough geographical information here to map the data. That, instead, is provided on each colliery’s individual page.

So we need to go through this list of webpages, grab the location information, and pull it all together into a single list.

Finding the structure in the HTML

To do this we need to isolate which part of the homepage contains the list. If you right-click on the page to ‘view source’ and search for ‘Haig’ (the first colliery listed) we can see it’s in a table that has a beginning tag like so: <table border=0 align=center style="font-size:10pt">

We can use =importXML to grab the contents of the table like so:

=Importxml("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]")

But we only want the links, so how do we grab just those instead of the whole table contents?

The answer is to add more detail to our request. If we look at the HTML that contains the link, it looks like this:

<td valign=top><a href="http://www.dmm.org.uk/colliery/h029.htm">Haig&nbsp;Pit</a></td>

So it’s within a <td> tag – but all the data in this table is, not surprisingly, contained within <td> tags. The key is to identify which <td> tag we want – and in this case, it’s always the fourth one in each row.

So we can add “//td[4]” (‘look for the fourth <td> tag’) to our function like so:

=Importxml("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]//td[4]")

Now we should have a list of the collieries – but we want the actual URL of the page that is linked to with that text. That is contained within the value of the href attribute – or, put in plain language: it comes after the bit that says href=".

So we just need to add one more bit to our function: “//@href”:

=Importxml("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]//td[4]//@href")

So, reading from the far right inwards, this is what it says: “Grab the value of href, within the fourth <td> tag on every row, of the table whose style value starts with font-size:10pt”.

Note: if there was only one link in every row, we wouldn’t need to include //td[4] to specify the link we needed.
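If you want to sanity-check an XPath like this outside Google Docs, the same expression can be run from Python with lxml; a sketch, assuming the requests and lxml libraries are installed:

import requests
from lxml import html

tree = html.fromstring(requests.get("http://www.dmm.org.uk/mindex.htm").content)
links = tree.xpath("//table[starts-with(@style, 'font-size:10pt')]//td[4]//@href")
print(links[:5])   # the first few colliery page links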

Scraping data from each link in a list

Now we have a list – but we still need to scrape some information from each link in that list

Firstly, we need to identify the location of information that we need on the linked pages. Taking the first page, view source and search for ‘Sheet 89’, which are the first two words of the ‘Map Ref’ line.

The HTML code around that information looks like this:

<td valign=top>(Sheet 89) NX965176, 54° 32' 35" N, 3° 36' 0" W</td>

Looking a little further up, the table that contains this cell uses HTML like this:

<table border=0 width="95%">

So if we needed to scrape this information, we would write a function like this:

=importXML("http://www.dmm.org.uk/colliery/h029.htm", "//table[starts-with(@width, '95%')]//tr[2]//td[2]")

…And we’d have to write it for every URL.

But because we have a list of URLs, we can do this much quicker by using cell references instead of the full URL.

So. Let’s assume that your formula was in cell C2 (as it is in this example), and the results have formed a column of links going from C2 down to C11. Now we can write a formula that looks at each URL in turn and performs a scrape on it.

In D2 then, we type the following:

=importXML(C2, "//table[starts-with(@width, '95%')]//tr[2]//td[2]")

If you copy the cell all the way down the column, it will change the function so that it is performed on each neighbouring cell.

In fact, we could simplify things even further by putting the second part of the function in cell D1 – without the quotation marks – like so:

//table[starts-with(@width, '95%')]//tr[2]//td[2]

And then in D2 change the formula to this:

=ImportXML(C2,$D$1)

(The dollar signs keep the D1 reference the same even when the formula is copied down, while C2 will change in each cell)

Now it works – we have the data from each of 8 different pages. Almost.

Troubleshooting with =IF

The problem is that the structure of those pages is not as consistent as we thought: the scraper is producing extra cells of data for some, which knocks out the data that should be appearing there from other cells.

So I’ve used an IF formula to clean that up as follows:

In cell E2 I type the following:

=if(D2="", ImportXML(C2,$D$1), D2)

Which says ‘If D2 is empty, then run the importXML formula again and put the results here, but if it’s not empty then copy the values across’.

That formula is copied down the column.

But there’s still one empty column even now, so the same formula is used again in column F:

=if(E2="", ImportXML(C2,$D$1), E2)

A hack, but an instructive one

As I said earlier, this isn’t the best way to write a scraper, but it is a useful way to start to understand how they work, and a quick method if you don’t have huge numbers of pages to scrape. With hundreds of pages, it’s more likely you will miss problems – so watch out for inconsistent structure and data that doesn’t line up.

Source: http://onlinejournalismblog.com/2011/10/14/scraping-data-from-a-list-of-webpages-using-google-docs/

Thursday, 25 June 2015

Data Scraping - What Are Hand-Scraped Hardwood Floors and What Are the Benefits?

If you love the look of hardwood flooring with lots of character, then you may want to check out hand-scraped hardwood flooring. Hand-scraped wood provides a warm vintage look, giving the floor instant character. These scraped hardwoods are suitable for living rooms, dining rooms, hallways and bedrooms. But what exactly is hand-scraped hardwood flooring?

Well, it is literally what you think it is. Hand-scraped hardwood flooring is created by hand, using specialized woodworking tools to make each board unique and give it an overall "old worn" appearance.

At Innovation Builders we offer solid wood floors finished on site with an actual hand-scraping technique, followed by stain and sealer. The floors are installed by an expert team of technicians who work each board with craftsman-like attention to detail. Following the scraping procedure, the floor is stained by hand in a customer-selected stain color, then protected with multiple coats of sealing and finishing polyurethane. This process of staining, sealing and coating gives the floor the look and durability of an old reclaimed wood floor, but with today's tough urethane finishes.

There are many, many benefits to hand-scraped wood flooring. Overall, these floors are extremely durable and hard wearing, providing years of trouble-free use. These wood floors remain looking newer for longer because the texture that the process provides hides the typical dents, dings and scratches that other floors can't hide so easily. That's great news for households with kids, dogs, and cats.

These types of wood flooring have another unique advantage as well. When you do scratch these floors during their lifetime, the scratches are easily repaired. As long as the scratch isn't too deep you can make them practically disappear without ever having to hire a professional. It's simple to hide the scratch by using a color-matched stain marker or repair kit that is readily available through local flooring distributors. These features make hand-scraped hardwood flooring a lot more durable and hassle-free to maintain than other types of wood flooring.

The expert processes utilized in the creation of these floors provide a custom look of worn wood with deep color and subtle highlights. When the light hits the wood at different times during the day, it provides an understated but powerful effect of depth and beauty. They instantly offer your rooms a rustic look full of character, allowing your home to become a warm and inviting environment. The rustic look of this wood provides a texture, style and rustic appeal that cannot be matched by any other type of flooring.

Hand-Scraped Hardwood Flooring is a floor that says welcome and adds a touch of elegance to any home. If you are looking to buy a new home and you haven't had the opportunity to see or feel hand scraped hardwoods, stop in any of the model homes at Innovation Builders in Keller, North Richland Hills or Grand Prairie, Texas and check it out!

Source: http://ezinearticles.com/?What-Are-Hand-Scraped-Hardwood-Floors-and-What-Are-the-Benefits?&id=6026646

Saturday, 20 June 2015

Web scraping in under 60 seconds: the magic of import.io

Import.io is a very powerful and easy-to-use tool for data extraction that has the aim of getting data from any website in a structured way. It is meant for non-programmers who need data (and for programmers who don’t want to overcomplicate their lives).

I almost forgot!! On top of everything else, it is also a free tool (o_O)

The purpose of this post is to teach you how to scrape a website and make a dataset and/or API in under 60 seconds. Are you ready?

It’s very simple. You just have to go to http://magic.import.io, paste the URL of the site you want to scrape, and push the “GET DATA” button. Yes! It is that simple! No plugins, downloads, previous knowledge or registration are necessary. You can do this from any browser; it even works on tablets and smartphones.

For example: if we want to have a table with the information on all items related to Chewbacca on MercadoLibre (a Latin American version of eBay), we just need to go to that site and make a search – then copy and paste the link (http://listado.mercadolibre.com.mx/chewbacca) on Import.io, and push the “GET DATA” button.

You’ll notice that now you have all the information on a table, and all you need to do is remove the columns you don’t need. To do this, just place the mouse pointer on top of the column you want to delete, and an “X” will appear.

Good news for those of us who are a bit more technically-oriented! There is a button that says “GET API” and this one is good to, well, generate an API that will update the data on each request. For this you need to create an account (which is also free of cost).

As you saw, we can scrape any website in under 60 seconds, even if it includes tons of results pages. This truly is magic, no? For more complex things that require logins, navigating into sub-pages, automated searches, et cetera, there is downloadable import.io software… but I’ll explain that in a different post.

Source: http://schoolofdata.org/2014/12/09/web-scraping-in-under-60-seconds-the-magic-of-import-io/

Tuesday, 9 June 2015

Web Scraping Services : Making Modern File Formats More Accessible

Data scraping is the process of automatically sorting through information contained on the internet, inside HTML, PDF or other documents, and collecting the relevant parts into databases and spreadsheets for later retrieval. On most websites the text is easily accessible in the source code, but an increasing number of businesses are using Adobe PDF (Portable Document Format: a format which can be viewed with the free Adobe Acrobat software on almost any operating system). The advantage of the PDF format is that the document looks exactly the same no matter which computer you view it from, making it ideal for business forms, specification sheets and so on; the disadvantage is that the text is often converted into an image, from which you cannot easily copy and paste. PDF scraping is the process of data scraping information contained in PDF files. To scrape a PDF document, you must employ a more diverse set of tools.

There are two main types of PDF files: those built from a text file and those built from an image (most likely scanned in). Adobe's own software is capable of scraping text-based PDF files, but special tools are needed to scrape text from image-based PDF files. The primary tool here is the OCR program. OCR (Optical Character Recognition) programs scan a document for small pictures that they can separate into letters; these pictures are then compared with actual letters, and when matches are found the letters are copied into a file. OCR programs can scrape image-based PDF files quite accurately, but they are not perfect.
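As a rough illustration of both routes (my own sketch, not a specific product recommendation), the following Python assumes the pypdf, pdf2image and pytesseract libraries, together with the underlying poppler and Tesseract tools, and a hypothetical input file:

import pytesseract                      # OCR for image-based PDFs
from pdf2image import convert_from_path
from pypdf import PdfReader             # text extraction for text-based PDFs

def scrape_pdf(path):
    """Return the text of a PDF, falling back to OCR when there is no text layer."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    if text.strip():
        return text
    # Image-based PDF: render each page to an image and run OCR over it.
    return "\n".join(pytesseract.image_to_string(img) for img in convert_from_path(path))

print(scrape_pdf("specification_sheet.pdf"))   # hypothetical file name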

Once the OCR program or Adobe program has finished PDF scraping a document, you can search through the data to find the parts you are most interested in. This information can then be stored into your favorite database or spreadsheet program. Some PDF scraping programs can sort the data into databases and/or spreadsheets automatically making your job that much easier.

Quite often you will not find a PDF scraping program that obtains exactly the data you want without customization. Surprisingly, a search on Google turned up only one business that will create a customized PDF scraping utility for your project. A handful of off-the-shelf utilities claim to be customizable, but seem to require a fair amount of programming knowledge and time to use effectively. Obtaining the data yourself with one of these tools is possible, but will likely prove tedious and time-consuming. It may be advisable to contract a company that specializes in PDF scraping to do it for you quickly and professionally.

Let's explore some real-world examples of the uses of PDF scraping technology. A group at Cornell University wanted to improve a database of technical documents in PDF format: in the old PDF files the links and references were just images of text, and the group wanted to turn them into working, clickable links, making the database easy to navigate and cross-reference. They employed a PDF scraping utility to deconstruct the PDF files and work out where the links were, and could then write a simple script to re-create the PDF files with working links replacing the old text images.

A computer hardware vendor wanted to display specifications data for his hardware on his website. He hired a company to perform PDF scraping of the hardware documentation on the manufacturers' website and save the PDF scraped data into a database he could use to update his webpage automatically.

PDF Scraping is just collecting information that is available on the public internet. PDF Scraping does not violate copyright laws.

PDF scraping is a great new technology that can significantly reduce your workload if that workload involves retrieving information from PDF files. Applications exist that can help you with smaller, easier PDF scraping projects, and there are companies that will create custom applications for larger or more intricate PDF scraping jobs.

Source: http://ezinearticles.com/?PDF-Scraping:-Making-Modern-File-Formats-More-Accessible&id=193321

Tuesday, 2 June 2015

On-line directory tree webscraping

As you surf around the internet, particularly in the old days, you may have come across two kinds of bare-bones listing page: the browse view generated by an Apache SVN server, and the plain directory view generated for UserDir on Apache.

In both cases you have a very primitive page that allows you to surf up and down the directory tree of the resource (either the SVN repository or a directory file system) and select links to resources that correspond to particular files.

Now, a file system can be thought of as a simple key-value store for these resources, burdened by an awkward set of conventions for listing the keys where you keep being obstructed by the ‘/’ character.

My objective is to provide a module that makes it easy to iterate through these directory trees and produce a flat table with the following helpful entries: url, abspath, fname, name and ext for each file, plus the svnrepo and rev of the listing it came from.

Although there is clearly redundant data between the fields url, abspath, fname, name, ext, having them in there makes it much easier to build a useful front end.

The function code (which I won’t copy in here) is at https://scraperwiki.com/scrapers/apache_directory_tree_extractor/. This contains the functions ParseSVNRevPage(url) and ParseSVNRevPageTree(url), both of which return dicts of the form:

{'url', 'rev', 'dirname', 'svnrepo',

 'contents':[{'url', 'abspath', 'fname', 'name', 'ext'}]}
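For a flavour of what such a function involves, here is my own rough sketch of a parser for a single Apache SVN listing page (it is not the code at the link above; it assumes the requests and lxml libraries and the usual <ul><li><a> layout of those pages, and it leaves the rev and svnrepo fields to be filled in from the page title):

import posixpath
from urllib.parse import urljoin, urlparse

import requests
from lxml import html

def parse_listing(url):
    """Parse one Apache SVN directory listing into the dict shape shown above."""
    tree = html.fromstring(requests.get(url).content)
    contents = []
    for href in tree.xpath("//ul/li/a/@href"):
        if href in ("..", "../"):
            continue                      # skip the parent-directory link
        absurl = urljoin(url, href)
        abspath = urlparse(absurl).path
        fname = posixpath.basename(abspath.rstrip("/"))
        name, ext = posixpath.splitext(fname)
        contents.append({"url": absurl, "abspath": abspath,
                         "fname": fname, "name": name, "ext": ext})
    dirname = posixpath.basename(urlparse(url).path.rstrip("/"))
    # rev and svnrepo would be parsed out of the page title in a fuller version.
    return {"url": url, "rev": None, "dirname": dirname,
            "svnrepo": None, "contents": contents}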

I haven’t written the code for parsing the Apache Directory view yet, but for now we have something we can use.

I scraped the UK Cave Data Registry with this scraper, which simply applies the ParseSVNRevPageTree() function to each of the links and glues the output into a flat array before saving it:

import scraperwiki  # classic ScraperWiki library: provides sqlite.save()

# href is the URL of one of the registry's SVN pages; ParseSVNRevPageTree
# comes from the scraper linked above.
lrdata = ParseSVNRevPageTree(href)

ldata = [ ]
for cres in lrdata["contents"]:
    # copy the repository-level fields down onto each file entry
    cres["svnrepo"], cres["rev"] = lrdata["svnrepo"], lrdata["rev"]
    ldata.append(cres)

# one row per file, keyed on (svnrepo, rev, abspath)
scraperwiki.sqlite.save(["svnrepo", "rev", "abspath"], ldata)

Now that we have a large table of links, we can make the cave image file viewer based on the query:

select abspath, url, svnrepo from swdata where ext='.jpg' order by abspath limit 500

By clicking on a reference to a jpg resource on the left, you can preview what it looks like on the right.

If you want to know why the images are muddy, a video of the conditions in which the data was gathered is here.

Image files are usually the most immediately interesting part of any unknown file system dump, and they can be made more interesting by associating meta-data with them (given that there is no convention for including such information in the EXIF sections of their file formats). This meta-data might be floating around in other files dumped into the same repository, for example in the form of links to the images from HTML pages that carry picture captions.

But that is a future scraping project for another time.

Source: https://scraperwiki.wordpress.com/2012/09/14/on-line-directory-tree-webscraping/