Wednesday 31 December 2014

Important Aspects Of Web Data Scraping

Have you ever heard of "data scraping"? Data scraping is not a new technology, and more than one successful businessman has made his fortune by taking advantage of it.

Website owners, however, are not always happy about the automated harvesting of their data. Webmasters have learned to disallow web scrapers by using tools and methods that block certain IP addresses from accessing their sites' content. If you scrape from a single address, you are ultimately left blocked.

Fortunately, there is a modern solution to this problem. Proxy data scraping technology solves it by routing requests through proxy IP addresses. Every time your data scraping program requests a page, the website sees the request coming from a different IP address. To the website owner, proxy data scraping looks like nothing more than a short period of slightly increased traffic from all over the world. They have only very limited and tedious ways of blocking such a setup, but more importantly, most of the time they will not even know they are being scraped.
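As a rough illustration, here is a minimal sketch in R (using the httr package) of the rotation idea: each request goes out through the next proxy in a pool, so the target site sees traffic from several different IP addresses. The proxy addresses and URL below are placeholders, not working servers.

library(httr)

# Hypothetical proxy pool (placeholder addresses) - replace with
# proxies you actually have access to.
proxy.hosts <- c("203.0.113.10", "203.0.113.11", "203.0.113.12")
proxy.ports <- c(8080, 3128, 8080)

url <- "http://example.com/page-to-scrape"
pages <- list()

# Each request is routed through a different proxy, so the target
# site sees each hit coming from a different IP address.
for (i in seq_along(proxy.hosts)) {
  resp <- tryCatch(
    GET(url, use_proxy(proxy.hosts[i], proxy.ports[i]), timeout(10)),
    error = function(e) NULL
  )
  if (!is.null(resp) && status_code(resp) == 200) {
    pages[[i]] <- content(resp, as = "text")
  }
}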

Now you might be asking yourself, "Where can I get proxy data scraping technology for my project?" There is a "do it yourself" solution, but unfortunately it is not trivial. You can rent proxy servers from hosting providers; that option is fairly pricey, but it is definitely better than the alternative: incredibly dangerous (though free) public proxy servers.

Plenty of free proxy servers exist, but the trick is finding them. Many sites list hundreds of servers, and identifying one that works, is accessible, and supports the protocol you need takes perseverance and trial and error. Worse, you do not know who a public server belongs to or what activities are going on behind it, so sending sensitive requests or data through a public proxy is a bad idea.
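If you do experiment with public proxies, at least for non-sensitive requests, the trial-and-error step can be automated. A minimal sketch, again in R with httr and placeholder addresses, that probes a candidate list and keeps only the proxies that actually respond:

library(httr)

# Hypothetical candidates copied from a public proxy list.
candidates <- data.frame(
  host = c("198.51.100.7", "198.51.100.8", "198.51.100.9"),
  port = c(3128, 8080, 8118),
  stringsAsFactors = FALSE
)

probe.url <- "http://example.com/"

# Probe each candidate with a short timeout; expect most to fail.
works <- vapply(seq_len(nrow(candidates)), function(i) {
  resp <- tryCatch(
    GET(probe.url, use_proxy(candidates$host[i], candidates$port[i]),
        timeout(5)),
    error = function(e) NULL
  )
  !is.null(resp) && status_code(resp) == 200
}, logical(1))

candidates[works, ]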

A less risky scenario for proxy data scraping is to rent a rotating proxy connection that cycles through a large number of private IP addresses. Companies such as www.webdatascraping.us offer large-scale anonymous proxy solutions, though they often carry a fairly hefty setup cost to get you going.

After performing a simple Google search, I quickly found a company offering access to anonymous proxy servers for data scraping, and was able to finish my project.

Different techniques and processes for collecting and analyzing data have developed over time. Web scraping has come onto the business market relatively recently. It is a process that provides large amounts of data from various sources, such as databases and websites.

It's good to clear the air and let people know that scraping data can be a legal process. The main reason is that the information or data involved is already publicly available on the internet. It is important to understand that this is not a process for stealing information but a process for gathering reliable information, even though some people consider the techniques unsavory.

So we can define web scraping as a process of collecting data from a variety of websites and databases, a process that can be carried out either manually or through the use of software. The rise of data mining companies has led to greater use of web extraction and web crawling processes. The other important task of such enterprises is processing and analyzing the data that is harvested. One of the important aspects of these companies is that they are experts at this service.

Source:http://www.articlesbase.com/outsourcing-articles/important-aspects-of-web-data-scraping-6160374.html

Saturday 27 December 2014

Scraping By

In his classic 1976 Chesapeake portrait, Beautiful Swimmers, William Warner described the scrape boat as "a workboat unlike any other I had ever seen on the Bay." Seeming half as wide as it was long, he said, it looked like "a miniature battleship." There's a reason for that, of course. It's a classic case of form following function; the boat evolved for one purpose, to ply the Bay's grassy shallows for shedding blue crabs.

Said to "float on a heavy dew," scrape boats run from 26 to 30 feet long and 9 to 10 feet wide. The hull is a shallow-V deadrise that quickly flattens toward the stern, enabling the boat to pull its twin scrapes—rectangular steel frames, each with a trailing mesh bag—in knee-deep waters. The broad beam might sound ungainly, but the hull tapers toward the stern—betraying its sailboat origins. And it has a graceful sheer, flowing from a bow height of a few feet to little more than a foot above the water amidships.

And you want a low freeboard when you spend the whole day hoisting aboard scrapes, which weigh 50 pounds apiece, not including the load of sea grass and crabs that come in too. Low sides or not, there's a higher than average incidence of back problems among scrape boat crabbers. They spend long days bending in precisely the position back doctors say puts undue pressure on the lower back as they sort through rolls of grasses to pluck out the peelers and softies. And that alone may be why crab potting is now the far more common way of catching soft crabs.

Some people think that's good, assuming that dragging a scrape across the Bay's beleaguered grass flats must be destructive. But the smooth bar of the scrape, unlike a toothed dredge, doesn't uproot grasses. In fact, where scraping is traditional, the grass beds seem relatively resilient. I've often thought if Maryland and Virginia had stuck with scraping as the major legal way to soft-crab, overfishing might not have become a problem. Pots can be deployed everywhere and by the thousands, whereas scraping is limited to grass beds and to ground covered at three miles per hour; and even the sturdiest waterman can only pull two of them by hand. But peeler pots seem here to stay, and other soft crabbers have taken to using a single, large scrape operated from larger workboats by hydraulic power.

The bottom line is that these lovely, superbly functional expressions of Chesapeake crabbing culture now number only in the dozens, if you count working, wooden models. There are some fiberglass scrape boat hulls in service, and a Carolina skiff or two has been adapted for the task. They are functional, but have little art to them.

It is probably a sign of how fast scrape boats are disappearing that the Smithsonian Institution recently took the lines off Darlene, a scraper worked by Morris Marsh of Smith Island, for its archives. You can see photos of scrape boats, and learn more about the 140-year-old history of scraping, in Paula Johnson's fine book, The Workboats of Smith Island. Mr. Marsh, still going strong in his late 60s, is the scraper who took Warner out nearly 40 years ago when he was researching Beautiful Swimmers.

Indeed, scraping seems to win over those who master it. Marsh's father-in-law, Ed Harrison, scraped for almost 70 years, nearly wearing through the cross-planked bottom of his boat—from the inside—with decades of walking the planks, tending his scrapes. And an islander who scrapes with Marsh today, David Laird, says he is 71—one year younger than Scotty Boy, the scrape boat he took over from his dad in 1958. "I wouldn't even know how to crab in another boat," Laird says.

Soft crabs may well be caught—or farmed—a century from now on the Chesapeake; but no one will devise a way to take them so intimately and beautifully from the shallowest marsh edges and tiniest crevices in the shore as the scrapers do.

Source:http://www.articlesbase.com/culture-articles/scraping-by-1560919.html

Monday 22 December 2014

Scrape Web data using R

Plenty of people have been scraping data from the web using R for a while now, but I just completed my first project and I wanted to share the code with you.  It was a little hard to work through some of the “issues”, but I had some great help from @DataJunkie on twitter.

As an aside, if you are learning R and coming from another package like SPSS or SAS, I highly advise that you follow the hashtag #rstats on Twitter to be amazed by the kinds of data analysis that are going on right now.

One note.  When I read in my table, it contained a weird set of characters.  I suspect that it is some sort of encoding issue, but luckily, I was able to get around it by recoding the data from a character factor to a number using the stringr package and some basic regular expressions.

Bring on fantasy football!

################################################################
## Help from the following sources:
## @DataJunkie on twitter
## http://www.regular-expressions.info/reference.html
## http://stackoverflow.com/questions/1395528/scraping-html-tables-into-r-data-frames-using-the-xml-package
## http://stackoverflow.com/questions/2443127/how-can-i-use-r-rcurl-xml-packages-to-scrape-this-webpage
################################################################

library(XML)
library(stringr)

# build the URL
url <- paste("http://sports.yahoo.com/nfl/stats/byposition?pos=QB",
        "&conference=NFL&year=season_2009",
        "&timeframe=Week1", sep="")

# read the tables and select the one that has the most rows
tables <- readHTMLTable(url)
n.rows <- unlist(lapply(tables, function(t) dim(t)[1]))
tables[[which.max(n.rows)]]

# select the table we need - read as a data frame
my.table <- tables[[7]]

# delete extra columns and keep data rows
View(head(my.table, n=20))
my.table <- my.table[3:nrow(my.table), c(1:3, 5:12, 14:18, 20:21, 23:24)]

# rename every column
c.names <- c("Name", "Team", "G", "QBRat", "P_Comp", "P_Att", "P_Yds", "P_YpA",
        "P_Lng", "P_Int", "P_TD", "R_Att", "R_Yds", "R_YpA", "R_Lng", "R_TD",
        "S_Sack", "S_SackYa", "F_Fum", "F_FumL")
names(my.table) <- c.names

# the data get read in with weird symbols - need to remove - initially stored
# as character factors; for the loops, I am manually telling the code which
# regex to use, assuming constant behavior depending on where the weird
# characters are -- is this an encoding issue?
front <- c(1)
back <- c(4:ncol(my.table))

# drop the junk characters at the start of the Name column
for(f in front) {
    test.front <- as.character(my.table[, f])
    tt.front <- str_sub(test.front, start=3)
    my.table[, f] <- tt.front
}

# extract the numeric part of each stats column
# (note the doubled backslashes - R string literals require escaping them)
for(b in back) {
    test <- as.character(my.table[, b])
    tt.back <- as.numeric(str_match(test, "\\-*\\d{1,3}[\\.]*[0-9]*"))
    my.table[, b] <- tt.back
}

str(my.table)
View(my.table)

# clear memory and quit R without saving the workspace
rm(list=ls())
q(save = "no")

Source: http://www.r-bloggers.com/scrape-web-data-using-r/

Tuesday 16 December 2014

Importance of Data Mining Services in Business

Data mining is the recovery of hidden information from data by means of algorithms. It helps to extract useful information from raw data, which can be used to make practical interpretations for decision making.

It can be technically defined as the automated extraction of hidden information from large databases for predictive analysis. In other words, it is the retrieval of useful information from large masses of data, which is also presented in an analyzed form for specific decision-making. Although data mining is a relatively new term, the technology is not. It is thus also known as knowledge discovery in databases, since it involves searching for implicit information in large databases.

It is primarily used today by companies with a strong customer focus - retail, financial, communication and marketing organizations. It has gained a lot of importance because of its wide applicability. It is being used increasingly in business applications for understanding and then predicting valuable data, like consumer buying behavior and tendencies, customer profiles, industry analysis, etc. It is used in several applications like market research, consumer behavior, direct marketing, bioinformatics, genetics, text analysis, e-commerce, customer relationship management and financial services.

However, the use of some advanced technologies makes it a decision-making tool as well. It is used in market research, industry research and for competitor analysis. It has applications in major industries like direct marketing, e-commerce, customer relationship management, scientific tests, genetics, financial services and utilities.

Data mining consists of five major elements (a small worked sketch follows the list):

•    Extract and load operation data onto the data store system.
•    Store and manage the data in a multidimensional database system.
•    Provide data access to business analysts and information technology professionals.
•    Analyze the data by application software.
•    Present the data in a useful format, such as a graph or table.
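As an illustration, here is a minimal sketch in R that walks through those five elements end to end. The file name and columns are hypothetical; the point is only to show extract, store, access, analyze and present as one small pipeline.

# 1. Extract and load operational data (hypothetical CSV file).
sales <- read.csv("sales_2014.csv", stringsAsFactors = FALSE)

# 2. Store and manage the data - here an in-memory data frame
#    stands in for a multidimensional database system.
store <- sales

# 3. Provide data access - analysts subset the store as needed.
east <- subset(store, region == "East")

# 4. Analyze the data - aggregate revenue by product line.
summary.by.product <- aggregate(revenue ~ product, data = east, FUN = sum)

# 5. Present the data in a useful format - a table and a graph.
print(summary.by.product)
barplot(summary.by.product$revenue,
        names.arg = summary.by.product$product,
        main = "Revenue by product (East region)")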

The use of data mining in business makes data more applicable. There are several kinds of data mining: text mining, web mining, relational database mining, graphic data mining, audio mining and video mining, which are all used in business intelligence applications. Data mining software is used to analyze consumer data and trends in banking as well as many other industries.

Source:http://ezinearticles.com/?Importance-of-Data-Mining-Services-in-Business&id=2601221

Saturday 13 December 2014

Scrape it – Save it – Get it

I imagine I’m talking to a load of developers. Which is odd seeing as I’m not a developer. In fact, I decided to lose my coding virginity by riding the ScraperWiki digger! I’m a journalist interested in data as a beat so all I need to do is scrape. All my programming will be done on ScraperWiki, as such this is the only coding home I know. So if you’re new to ScraperWiki and want to make the site a scraping home-away-from-home, here are the basics for scraping, saving and downloading your data:
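ScraperWiki's editor runs its own supported languages, so the snippet below is not ScraperWiki code. It is just a rough sketch, in the same R used elsewhere on this blog, of what the three steps look like in general: scrape a page, save the result, and get it back out. The URL and file name are placeholders.

library(XML)

# Scrape it: pull the tables out of a (placeholder) web page.
url <- "http://example.com/some-table-page"
tables <- readHTMLTable(url, stringsAsFactors = FALSE)
my.data <- tables[[1]]

# Save it: write the scraped table to a local datastore (here, a CSV).
write.csv(my.data, "my_scrape.csv", row.names = FALSE)

# Get it: read the saved data back whenever you need it.
my.data.again <- read.csv("my_scrape.csv", stringsAsFactors = FALSE)
head(my.data.again)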

With these three simple steps – scrape it, save it, get it – you can take advantage of what ScraperWiki has to offer: writing, running and debugging code in an easy-to-use editor; collaborative coding with chat and user viewing functions; a dashboard with all your scrapers in one place; examples, cheat sheets and documentation; a huge range of libraries at your disposal; a datastore with API callback; and email alerts to let you know when your scrapers break.

So give it a go and let us know what you think!

Source:https://blog.scraperwiki.com/2011/04/scrape-it-save-it-get-it/

Thursday 4 December 2014

Multiple Listing Service Gets Favorable Appellate Ruling in Scraping Lawsuit

This is a follow-up to our massive post on anti-scraping lawsuits in the real estate industry from New Year’s Eve 2012 (Note: the portion on MRIS is about halfway through the post, labeled “Same Writ, Different Plaintiff”).

AHRN is a California real estate broker that owns and operates NeighborCity.com. The site gets its data in part by scraping from MLS databases–in this case, MRIS. As part of the scraping, however, AHRN had collected and displayed copyrighted photographs among the bits and pieces of general textual information about the properties. MRIS sent a cease and desist letter to AHRN, and filed suit alleging various copyright claims after the parties failed to agree on a license to use the photographs. Ultimately, a district court in Maryland granted a motion made by MRIS for a preliminary injunction.

When we last left off, the district court had revised its preliminary injunction order to enjoin only AHRN’s use of MRIS’s photographs–not the compilation itself or any textual elements that may be considered a part of it. Since then, AHRN appealed the injunction. On July 18th, the Fourth Circuit Court of Appeals affirmed.

Background

AHRN argued that MRIS failed to show a likelihood of success on its copyright infringement claim because MRIS: (1) failed to register its copyright in the individual photographs when it registered the database, and (2) did not have a copyright interest in the photographs because the subscribers’ electronic agreement to MRIS’s terms of use failed to transfer those rights.

MRIS Did Not Fail to Register Its Interest in the Photographs

This first question revolved around the scope of MRIS’s registrations. AHRN argued that MRIS’s collective work registrations did not cover the individual photographs because MRIS did not identify the names of the authors and titles of those works. MRIS argued that 17 U.S.C. §409 did not require any such identification when applied to collective works, and that its general description of the pre-existing photographs’ inclusion sufficed.

The court began its discussion by noting the “ambiguous” nature of §409’s language and its varying judicial interpretations. Some courts have barred infringement suits because the collective work registrant failed to list the authors, while others have allowed infringement suits where the registrant owns the rights to the component works as well as the collective work.

In this case, the court agreed with MRIS and found that the latter approach was more consistent with the relevant statutes and regulations:

    Adding impediments to automated database authors’ attempts to register their own component works conflicts with the general purpose of Section 409 to encourage prompt registration . . . and thwarts the specific goal embodied in Section 408 of easing the burden on group registrations[.]

As part of its decision, the court looked favorably upon the 3Taps case, in which Craigslist sued 3Taps and Padmapper for scraping and repackaging its online classified ads. In that case, the court reasoned that it would be “inefficient” to require registrants to list each author of an extremely large number of component works to which the registrant already had obtained an exclusive license.

Having found that MRIS’s general description satisfied § 409’s pre-suit registration requirement, the court moved on to the merits of MRIS’s infringement claim–more specifically, the question of whether MRIS’s Terms of Use actually transferred a copyright interest to its subscribers’ photographs.

E-SIGN Applies to Assignments of Copyrights and Overrides § 204

AHRN challenged MRIS’s ownership of the photographs by arguing that an MLS subscriber’s electronic agreement to MRIS’s Terms of Use does not operate as an assignment of rights under § 204, which requires a signed “writing.”

In a bad sign for AHRN, the court began its discussion by volunteering an argument that MRIS did not even bring up:

    [I]n situations where “the copyright [author] appears to have no dispute with its [assignee] on this matter, it would be anomalous to permit a third party infringer to invoke [Section 204(a)’s signed writing requirement] against the [assignee].”

With that in mind, the court went on to discuss the E-SIGN act’s impact on the conveyance of copyrights. After establishing the meaning of “e-signature,” the court focused on whether the act was limited from covering this type of situation.

    The Act provides that it “does not . . . limit, alter, or otherwise affect any requirement imposed by a statute, regulation, or rule of law . . . other than a requirement that contracts or other records be written, signed, or in nonelectronic form[.]”

The court emphasized the phrase “other than,” reasoning that a plain reading of the E-SIGN language showed that Congress intended the provisions to limit § 204. It also noted that Congress did not list copyright assignments among the various agreements to which E-SIGN did not apply–nor was there a catchall that included such assignments.

The court then turned to the Hermosilla case, in which a district court in Florida upheld the validity of a copyright conveyance via e-mail. It emphasized the Hermosilla court’s reliance on the purpose of § 204–“to resolve disputes between copyright owners and transferees and to protect copyright holders from persons mistakenly or fraudulently claiming oral licenses or copyright ownership.” The appellate court agreed with the Hermosilla court that allowing assignment via e-mail actually helped cut down on these types of disputes.

    To invalidate copyright transfer agreements solely because they were made electronically would thwart the clear congressional intent embodied in the E-Sign Act.

All in all, the court basically said “we don’t see why E-SIGN shouldn’t apply.” Note that it did not pass judgment specifically on whether MRIS’s Terms of Use constituted a valid contract. It simply mentioned that AHRN waived that argument by not bringing it up sooner.

Source: http://blog.ericgoldman.org/archives/2013/07/multiple_listin_1.htm