Saturday, 29 June 2013

Backtesting & Data Mining

In this article we'll take a look at two related practices that are widely used by traders: backtesting and data mining. These techniques are powerful and valuable when used correctly, but traders often misuse them. Therefore, we'll also explore two common pitfalls of these techniques, known as the multiple hypothesis problem and overfitting, and how to overcome them.

Backtesting

Backtesting is just the process of using historical data to test the performance of some trading strategy. Backtesting generally starts with a strategy that we would like to test, for instance buying GBP/USD when it crosses above the 20-day moving average and selling when it crosses below that average. Now we could test that strategy by watching what the market does going forward, but that would take a long time. This is why we use historical data that is already available.
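To make this concrete, here is a minimal sketch of such a backtest in Python with pandas. The file name, column names and the assumption that we are always either long or flat are illustrative choices, not a complete trading simulator:

```python
import pandas as pd

# Assumed CSV of daily GBP/USD closes with columns: date, close
prices = pd.read_csv("gbpusd_daily.csv", parse_dates=["date"], index_col="date")

ma20 = prices["close"].rolling(20).mean()          # 20-day moving average
position = (prices["close"] > ma20).astype(int)    # long above the average, flat below

# Act on the signal one bar later to avoid look-ahead bias
strategy_returns = position.shift(1) * prices["close"].pct_change()
print("Total return: {:.2%}".format((1 + strategy_returns).prod() - 1))
```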

"But wait, wait!" I hear you say. "Couldn't you cheat or at least be biased because you already know what happened in the past?" That's definitely a concern, so a valid backtest will be one in which we aren't familiar with the historical data. We can accomplish this by choosing random time periods or by choosing many different time periods in which to conduct the test.

Now I can hear another group of you saying, "But all that historical data just sitting there waiting to be analyzed is tempting, isn't it? Maybe there are profound secrets in that data just waiting for geeks like us to discover them. Would it be so wrong for us to examine that historical data first, to analyze it and see if we can find patterns hidden within it?" This argument is also valid, but it leads us into an area fraught with danger: the world of data mining.

Data Mining

Data Mining involves searching through data in order to locate patterns and find possible correlations between variables. In the example above involving the 20-day moving average strategy, we just came up with that particular indicator out of the blue, but suppose we had no idea what type of strategy we wanted to test? That's when data mining comes in handy. We could search through our historical data on GBP/USD to see how the price behaved after it crossed many different moving averages. We could check price movements against many other types of indicators as well and see which ones correspond to large price movements.

The subject of data mining can be controversial because as I discussed above it seems a bit like cheating or "looking ahead" in the data. Is data mining a valid scientific technique? On the one hand the scientific method says that we're supposed to make a hypothesis first and then test it against our data, but on the other hand it seems appropriate to do some "exploration" of the data first in order to suggest a hypothesis. So which is right? We can look at the steps in the Scientific Method for a clue to the source of the confusion. The process in general looks like this:

Observation (data) >>> Hypothesis >>> Prediction >>> Experiment (data)

Notice that we can deal with data during both the Observation and Experiment stages. So both views are right. We must use data in order to create a sensible hypothesis, but we also test that hypothesis using data. The trick is simply to make sure that the two sets of data are not the same! We must never test our hypothesis using the same set of data that we used to suggest our hypothesis. In other words, if you use data mining in order to come up with strategy ideas, make sure you use a different set of data to backtest those ideas.

Now we'll turn our attention to the main pitfalls of using data mining and backtesting incorrectly. The general problem is known as "over-optimization" and I prefer to break that problem down into two distinct types. These are the multiple hypothesis problem and overfitting. In a sense they are opposite ways of making the same error. The multiple hypothesis problem involves choosing many simple hypotheses while overfitting involves the creation of one very complex hypothesis.

The Multiple Hypothesis Problem

To see how this problem arises, let's go back to our example where we backtested the 20-day moving average strategy. Let's suppose that we backtest the strategy against ten years of historical market data and, lo and behold, guess what? The results are not very encouraging. However, being the rough-and-tumble traders that we are, we decide not to give up so easily. What about a ten-day moving average? That might work out a little better, so let's backtest it! We run another backtest and find that the results still aren't stellar, but they're a bit better than the 20-day results. We decide to explore a little and run similar tests with 5-day and 30-day moving averages. Finally it occurs to us that we could actually just test every single moving average up to some point and see how they all perform. So we test the 2-day, 3-day, 4-day, and so on, all the way up to the 50-day moving average.

Now certainly some of these averages will perform poorly and others will perform fairly well, but there will have to be one of them which is the absolute best. For instance we may find that the 32-day moving average turned out to be the best performer during this particular ten year period. Does this mean that there is something special about the 32-day average and that we should be confident that it will perform well in the future? Unfortunately many traders assume this to be the case, and they just stop their analysis at this point, thinking that they've discovered something profound. They have fallen into the "Multiple Hypothesis Problem" pitfall.

The problem is that there is nothing at all unusual or significant about the fact that some average turned out to be the best. After all, we tested almost fifty of them against the same data, so we'd expect to find a few good performers, just by chance. It doesn't mean there's anything special about the particular moving average that "won" in this case. The problem arises because we tested multiple hypotheses until we found one that worked, instead of choosing a single hypothesis and testing it.
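Here is what that exhaustive search looks like in code, continuing with the `prices` DataFrame from the first sketch. The point of the snippet is the pitfall itself: some window will always come out on top of this in-sample ranking, by chance alone:

```python
def backtest_ma(prices, window):
    """Total return of the simple moving-average strategy for one window."""
    ma = prices["close"].rolling(window).mean()
    position = (prices["close"] > ma).astype(int)
    returns = position.shift(1) * prices["close"].pct_change()
    return (1 + returns).prod() - 1

# Test every moving average from 2 to 50 days against the SAME data
results = {w: backtest_ma(prices, w) for w in range(2, 51)}
best = max(results, key=results.get)
print(f"Best in-sample window: {best}-day, return {results[best]:.2%}")
```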

Here's a classic analogy. We could come up with a single hypothesis such as "Scott is great at flipping heads on a coin." From that, we could create a prediction that says, "If the hypothesis is true, Scott will be able to flip 10 heads in a row." Then we can perform a simple experiment to test that hypothesis. If I flip 10 heads in a row, that actually doesn't prove the hypothesis. However, if I can't accomplish the feat, it definitely disproves the hypothesis. As we do repeated experiments that fail to disprove the hypothesis, our confidence in its truth grows.

That's the right way to do it. However, what if we had come up with 1,000 hypotheses instead of just the one about me being a good coin flipper? We could make the same hypothesis about 1,000 different people...me, Ed, Cindy, Bill, Sam, etc. OK, now let's test our multiple hypotheses. We ask all 1,000 people to flip a coin. There will probably be about 500 who flip heads. Everyone else can go home. Now we ask those 500 people to flip again, and this time about 250 will flip heads. On the third flip about 125 people flip heads, on the fourth about 63 people are left, and on the fifth flip there are about 32. These 32 people are all pretty amazing aren't they? They've all flipped five heads in a row! If we flip five more times and eliminate half the people each time on average, we will end up with 16, then 8, then 4, then 2 and finally one person left who has flipped ten heads in a row. It's Bill! Bill is a "fantabulous" flipper of coins! Or is he?
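A quick simulation of the tournament (a sketch with a fixed random seed) shows how reliably chance alone produces a "winner":

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 people each flip a fair coin 10 times (1 = heads)
flips = rng.integers(0, 2, size=(1000, 10))
winners = (flips.sum(axis=1) == 10).sum()

# P(10 heads) = 0.5**10, about 1 in 1,024, so with 1,000 flippers
# we expect roughly one "fantabulous" flipper purely by chance.
print(f"People who flipped 10 heads in a row: {winners}")
```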

Well we really don't know, and that's the point. Bill may have won our contest out of pure chance, or he may very well be the best flipper of heads this side of the Andromeda galaxy. By the same token, we don't know if the 32-day moving average from our example above just performed well in our test by pure chance, or if there is really something special about it. But all we've done so far is to find a hypothesis, namely that the 32-day moving average strategy is profitable (or that Bill is a great coin flipper). We haven't actually tested that hypothesis yet.

So now that we understand that we haven't really discovered anything significant yet about the 32-day moving average or about Bill's ability to flip coins, the natural question to ask is what should we do next? As I mentioned above, many traders never realize that there is a next step required at all. Well, in the case of Bill you'd probably ask, "Aha, but can he flip ten heads in a row again?" In the case of the 32-day moving average, we'd want to test it again, but certainly not against the same data sample that we used to choose that hypothesis. We would choose another ten-year period and see if the strategy worked just as well. We could continue to do this experiment as many times as we wanted until our supply of new ten-year periods ran out. We refer to this as "out of sample testing", and it's the way to avoid this pitfall. There are various methods of such testing, one of which is "cross validation", but we won't get into that much detail here.
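In code, out-of-sample testing just means re-running the chosen strategy on data that played no part in choosing it. A sketch, reusing the assumed `prices` data and `backtest_ma` helper from above (the decade boundaries are placeholders):

```python
# Choose the best window on one decade, then test it on another
in_sample = prices.loc["1993":"2002"]
out_of_sample = prices.loc["2003":"2012"]

results = {w: backtest_ma(in_sample, w) for w in range(2, 51)}
best = max(results, key=results.get)

print(f"{best}-day MA, in-sample:     {backtest_ma(in_sample, best):.2%}")
print(f"{best}-day MA, out-of-sample: {backtest_ma(out_of_sample, best):.2%}")
```

If the out-of-sample number collapses, the "best" window was most likely just the luckiest of the fifty we tried.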

Overfitting

Overfitting is really a kind of reversal of the above problem. In the multiple hypothesis example above, we looked at many simple hypotheses and picked the one that performed best in the past. In overfitting we first look at the past and then construct a single complex hypothesis that fits well with what happened. For example if I look at the USD/JPY rate over the past 10 days, I might see that the daily closes did this:

up, up, down, up, up, up, down, down, down, up.

Got it? See the pattern? Yeah, neither do I actually. But if I wanted to use this data to suggest a hypothesis, I might come up with...

My amazing hypothesis:

If the closing price goes up twice in a row then down for one day, or if it goes down for three days in a row we should buy,

but if the closing price goes up three days in a row we should sell,

but if it goes up three days in a row and then down three days in a row we should buy.

Huh? Sounds like a whacky hypothesis right? But if we had used this strategy over the past 10 days, we would have been right on every single trade we made! The "overfitter" uses backtesting and data mining differently than the "multiple hypothesis makers" do. The "overfitter" doesn't come up with 400 different strategies to backtest. No way! The "overfitter" uses data mining tools to figure out just one strategy, no matter how complex, that would have had the best performance over the backtesting period. Will it work in the future?

Not likely, but we could always keep tweaking the model and testing the strategy in different samples (out of sample testing again) to see if our performance improves. When we stop getting performance improvements and the only thing that's rising is the complexity of our model, then we know we've crossed the line into overfitting.
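That stopping rule shows up clearly in a toy example. The sketch below (made-up data, not market prices) fits polynomials of increasing degree to noisy points: the in-sample error always falls as complexity rises, while the out-of-sample error eventually turns back up, which is the line the paragraph describes:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy observations

x_fit, y_fit = x[::2], y[::2]        # data used to build the model
x_new, y_new = x[1::2], y[1::2]      # held-out data the model never saw

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_fit, y_fit, degree)
    err_in = np.mean((np.polyval(coeffs, x_fit) - y_fit) ** 2)
    err_out = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: in-sample error {err_in:.3f}, "
          f"out-of-sample error {err_out:.3f}")
```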

Conclusion

So in summary, we've seen that data mining is a way to use our historical price data to suggest a workable trading strategy, but that we have to be aware of the pitfalls of the multiple hypothesis problem and overfitting. The way to make sure that we don't fall prey to these pitfalls is to backtest our strategy using a different dataset than the one we used during our data mining exploration. We commonly refer to this as "out of sample testing".


Source: http://ezinearticles.com/?Backtesting-and-Data-Mining&id=341468

Thursday, 27 June 2013

Data Mining vs Screen-Scraping

Data mining isn't screen-scraping. I know that some people in the room may disagree with that statement, but they're actually two almost completely different concepts.

In a nutshell, you might state it this way: screen-scraping allows you to get information, where data mining allows you to analyze information. That's a pretty big simplification, so I'll elaborate a bit.

The term "screen-scraping" comes from the old mainframe terminal days where people worked on computers with green and black screens containing only text. Screen-scraping was used to extract characters from the screens so that they could be analyzed. Fast-forwarding to the web world of today, screen-scraping now most commonly refers to extracting information from web sites. That is, computer programs can "crawl" or "spider" through web sites, pulling out data. People often do this to build things like comparison shopping engines, archive web pages, or simply download text to a spreadsheet so that it can be filtered and analyzed.

Data mining, on the other hand, is defined by Wikipedia as the "practice of automatically searching large stores of data for patterns." In other words, you already have the data, and you're now analyzing it to learn useful things about it. Data mining often involves lots of complex algorithms based on statistical methods. It has nothing to do with how you got the data in the first place. In data mining you only care about analyzing what's already there.
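By contrast, a data mining step starts after the data is in hand. A minimal sketch, assuming previously collected product data saved to a hypothetical CSV with name, category, price and rating columns:

```python
import pandas as pd

# Analyze data we already have; no fetching involved
df = pd.read_csv("products.csv")

print(df.groupby("category")["price"].describe())  # price patterns per category
print(df[["price", "rating"]].corr())              # correlation between variables
```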

The difficulty is that people who don't know the term "screen-scraping" will try Googling for anything that resembles it. We include a number of these terms on our web site to help such folks; for example, we created pages entitled Text Data Mining, Automated Data Collection, Web Site Data Extraction, and even Web Site Ripper (I suppose "scraping" is sort of like "ripping"). So it presents a bit of a problem: we don't necessarily want to perpetuate a misconception (i.e., screen-scraping = data mining), but we also have to use terminology that people will actually use.


Source: http://ezinearticles.com/?Data-Mining-vs-Screen-Scraping&id=146813

Tuesday, 25 June 2013

An Easy Way For Data Extraction

There are many data scraping tools available on the internet. With these tools you can download large amounts of data without any stress. Over the past decade, the internet revolution has turned the entire world into an information center, and you can obtain almost any type of information from it. However, if you want particular information on one topic, you need to search many websites, and if you want to keep all that information, you have to copy it and paste it into your documents. That is hectic work for anyone. Scraping tools save you time and money and reduce the manual work.

A web data extraction tool pulls data from the HTML pages of different websites and compares it. Every day, many new websites are hosted on the internet, and it is not possible to visit them all yourself. With a data extraction tool, you can cover far more web pages than you ever could manually. If you work with a wide range of applications, these scraping tools are very useful.

Data extraction software is also used to compare structured data across the internet. There are many search engines that will help you find websites on a particular topic, but the data on different sites appears in different styles. A scraping tool helps you compare the data from different sites and structure it for your records.

A web crawler tool indexes web pages and copies data from the internet to your hard disk, letting you browse the content much faster, even offline. It also helps with large downloads that would otherwise take a long time, for example by fetching the data at a fast rate during off-peak hours. Another tool, aimed at business users, is the email extractor: it collects customers' email addresses so you can send advertisements for your product to targeted customers at any time, and it is a handy way to build a customer database.
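The email extractor described above can be sketched in a few lines with Python's standard re module (the pattern is deliberately simple and will miss unusual addresses):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(text):
    """Return the unique email addresses found in a block of text."""
    return sorted(set(EMAIL_RE.findall(text)))

page = "Contact sales@example.com or support@example.org for details."
print(extract_emails(page))  # ['sales@example.com', 'support@example.org']
```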

Many more scraping tools are available on the internet, and several reputable websites provide information about them. Most can be downloaded for a nominal fee.


Source: http://ezinearticles.com/?An-Easy-Way-For-Data-Extraction&id=3517104

Monday, 24 June 2013

Usefulness of Web Scraping Services

For any business or organization, surveys and market research play an important role in the strategic decision-making process. Data extraction and web scraping techniques are important tools for finding relevant data and information for your personal or business use. Many companies employ people to copy and paste data manually from web pages. This process is reliable but very costly, as it wastes time and effort: the amount of data collected is small compared to the resources and time spent gathering it.

Nowadays, various data mining companies have developed effective web scraping techniques that can crawl over thousands of websites and their pages to harvest particular information. The extracted information is then stored in a CSV file, a database, an XML file, or any other required format. After the data has been collected and stored, a data mining process can be used to extract the hidden patterns and trends contained in it. By understanding the correlations and patterns in the data, policies can be formulated, thereby aiding the decision-making process. The information can also be stored for future reference.
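The harvest-then-store pipeline described here can be sketched as follows; the URLs, page structure and file name are placeholders, and the libraries assumed are requests and BeautifulSoup:

```python
import csv
import requests
from bs4 import BeautifulSoup

# Step 1: harvest particular information from a list of pages
with open("harvest.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "title"])
    for url in ["http://example.com/a", "http://example.com/b"]:
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        title = soup.title.get_text(strip=True) if soup.title else ""
        writer.writerow([url, title])

# Step 2: the stored CSV now feeds whatever data mining process follows
```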

The following are some common examples of the data extraction process:

• Scraping a government portal to extract the names of citizens who are eligible for a given survey
• Scraping competitor websites for feature data and product pricing
• Using web scraping to download videos and images for a stock photography site or for website design

Automated Data Collection
It is important to note that the web scraping process allows a company to monitor website data changes over a given time frame. It can also collect data on a regular, routine basis. Automated data collection techniques are quite important as they help companies discover customer trends and market trends. By determining market trends, it is possible to understand customer behavior and predict how the data is likely to change.

The following are some examples of automated data collection:

• Monitoring price information for particular stocks on an hourly basis
• Collecting mortgage rates from various financial institutions on a daily basis (see the sketch after this list)
• Checking weather reports regularly, as required
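A routine collector like the mortgage rate example can be as simple as a scheduled loop. A sketch only, with a placeholder fetch function; in practice a cron job or task scheduler would trigger one run per day:

```python
import csv
import time
from datetime import date

def fetch_rate(url):
    """Placeholder: scrape one institution's posted rate, as in earlier sketches."""
    ...

while True:
    with open("mortgage_rates.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for url in ["http://bank-a.example", "http://bank-b.example"]:
            writer.writerow([date.today().isoformat(), url, fetch_rate(url)])
    time.sleep(24 * 60 * 60)  # wait a day, then collect again
```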

By using web scraping services it is possible to extract any data that is related to your business. The data can then be downloaded into a spreadsheet or a database to be analyzed and compared. Storing the data in a database or in a required format makes it easier to interpret the data, understand the correlations, and identify the hidden patterns.

Through web scraping it is possible to get quicker, more accurate results, saving many resources in terms of money and time. With data extraction services, it is possible to fetch information about pricing, mailing lists, databases, profile data, and competitor data on a consistent basis. With the emergence of professional data mining companies, outsourcing your services will greatly reduce your costs while assuring you of high-quality services.


Source: http://ezinearticles.com/?Usefulness-of-Web-Scraping-Services&id=7181014

Friday, 21 June 2013

Data Mining Is Useful for Business Application and Market Research Services

Data mining is an important tool for modern business and market research, transforming raw data into an information advantage. Many companies in India offer complete solutions and services for data mining, extracting and providing the important information firms need for analysis and research.

These services are in demand today because organizations of every kind (trade associations, retail, financial and market firms, institutes and government bodies) need large amounts of information for their market research. Data mining allows you to receive all types of information when needed, with the data already filtered down to what is relevant.

This service is of great importance because its applications help businesses understand consumer buying trends, perform industry analysis, and so on. Business applications that use these services include:
1) Research services
2) Consumer behavior
3) E-commerce
4) Direct marketing
5) Financial services
6) Customer relationship management

Benefits of Data Mining Services in Business

• Understand customer needs for better decision-making
• Generate more business
• Target the relevant market
• Risk-free outsourcing experience
• Provide data access to business analysts
• Help minimize risk and improve ROI
• Improve profitability by detecting unusual patterns in sales, claims and transactions
• Major decrease in direct marketing expenses

Taken together, these benefits help a business understand its customers, target the right market, minimize risk, and improve its return on investment.

Using these services helps ensure that the data is relevant to the business application at hand. The different types of mining (text mining, web mining, relational database mining, and graphics, audio and video mining) are all used in enterprise applications.



Source: http://ezinearticles.com/?Data-Mining-Is-Useful-for-Business-Application-and-Market-Research-Services&id=5123878

Wednesday, 19 June 2013

The Increasing Significance of Data Entry Services


Today's business environment has become extremely competitive in the new era of globalization. Huge business behemoths that once benefited from monopolistic luxuries are now being challenged by newer participants in the marketplace, forcing established players to reorganize their plans and strategies. These are some of the major reasons that seem to have forced businesses to opt for outsourcing services such as data entry services, which allow them to focus on their core business processes. This in turn makes it simpler for them to attain and maintain business competencies, a prerequisite for effectively overcoming the rising competitive challenges.

So, how exactly is data entry helping businesses achieve their targeted goals and objectives? Well, to answer that, we will first have to delve deeper into the field of data entry and allied activities. To start with, it is worth mentioning that every business, big and small, generates voluminous amounts of data and information that is important from a business point of view. This is exactly where the problems start to surface, because accessing, analyzing and processing such voluminous amounts of data is too time consuming and obviously a task that can easily be classified as non-productive. And these are exactly the reasons for outsourcing such non-core work processes to third-party outsourcing firms.

There are many data entry outsourcing firms, and most of them are located in developing countries such as India. There are many reasons for such regional clustering, but the most prominent seems to be that India has a vast talent pool of educated, English-speaking professionals. The best part is that it is relatively inexpensive to hire the services of these professionals; the same level of expertise would have been a lot more expensive to hire in a developed country. Consequently, more and more businesses worldwide are outsourcing their non-core work processes.

As globalization intensifies in the coming years, businesses will face even greater competitive pressures, and it will simply not be possible for them to even think about managing everything on their own, let alone actually doing it. However, that should not be a problem, especially for businesses that opt for outsourcing services such as data entry and data conversion. By hiring such high-end and cost-effective services, these businesses will be able to realize the associated benefits, which come mostly as significant cost reductions, optimum accuracy, and increased efficiency.

So if you are a business executive who thinks outsourcing data entry related processes can help achieve your targeted business goals and objectives, it's time you contacted an offshore outsourcing provider and asked them precisely how they can ease your business. Just make sure that you opt for the best available data entry services provider, because it will be like sharing a part of your business.


Source: http://ezinearticles.com/?The-Increasing-Significance-of-Data-Entry-Services&id=1125870

Monday, 17 June 2013

Why Outsourcing Data Mining Services?


Are huge volumes of raw data waiting to be converted into information that you can use? Your organization's hunt for valuable information ends with data mining, which can bring more accuracy and clarity to the decision-making process.

Today's world is information hungry, and with the Internet offering flexible communication, there is a remarkable flow of data. It is important to make that data available in a readily workable format where it can be of great help to your business. Filtered data is of considerable use to an organization; efficient data mining services increase profits, smooth the workflow and reduce overall risk.

Data mining is a process that involves sorting through vast amounts of data and seeking out the pertinent information. In most instances data mining is conducted by professionals, business organizations and financial analysts, although a growing number of fields are finding benefits from using it in their business.

Data mining helps make every decision quicker and more feasible. The information it produces is used in many applications for decision-making relating to direct marketing, e-commerce, customer relationship management, healthcare, scientific tests, telecommunications, financial services and utilities.

Data mining services include:

    Aggregating data from websites into an Excel database
    Searching for and collecting contact information from websites
    Using software to extract data from websites
    Extracting and summarizing stories from news sources
    Gathering information about competitors' businesses

In this era of globalization, handling your important data is becoming a headache for many business verticals, and outsourcing is then a profitable option for your business. Since all projects are customized to suit the exact needs of the customer, huge savings in terms of time, money and infrastructure can be realized.

Advantages of Outsourcing Data Mining Services:

    Skilled and qualified technical staff who are proficient in English
    Improved technology scalability
    Advanced infrastructure resources
    Quick turnaround time
    Cost-effective prices
    Secure network systems to ensure data safety
    Increased market coverage

Outsourcing will help you focus on your core business operations and thus improve overall productivity, so data mining outsourcing has become a wise choice for businesses. Outsourcing these services helps businesses manage their data effectively, which in turn enables them to achieve higher profits.


Source: http://ezinearticles.com/?Why-Outsourcing-Data-Mining-Services?&id=3066061

Friday, 14 June 2013

Is Web Scraping Relevant in Today's Business World?

Different techniques and processes have been created and developed over time to collect and analyze data. Web scraping is one of the processes that has hit the business market recently. It is a great process that offers businesses vast amounts of data from different sources such as websites and databases.

It is good to clear the air and let people know that data scraping is a legal process. The main reason is that the information or data is already publicly available on the internet. It is important to know that it is not a process of stealing information but rather a process of collecting reliable information. Some people have regarded the technique as unsavory behavior; their main argument is that, with time, the process will be overused and shade into plagiarism.

We can therefore simply define web scraping as a process of collecting data from a wide variety of websites and databases. The process can be carried out either manually or by the use of software. The rise of data mining companies has led to more use of web extraction and web crawling processes. Such companies' other main functions are to process and analyze the data harvested. One of the important aspects of these companies is that they employ experts. The experts know the viable keywords, the kind of information that can create usable statistics, and the pages that are worth the effort. Therefore the role of data mining companies is not limited to the mining of data; they also help their clients identify the various relationships and build the models.

Some of the common methods of web scraping include web crawling, text grepping, DOM parsing, and expression matching. These can be implemented with parsers, regular expressions over HTML pages, or even semantic annotation. There are many different ways of scraping the data, but most importantly they work towards the same goal. The main objective of using a web scraping service is to retrieve and compile data contained in databases and websites. This is a must for a business that wants to remain relevant in the business world.
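Two of those methods side by side, as a sketch (the markup and pattern are hypothetical): expression matching pulls data with a regular expression over the raw text, while DOM parsing walks the page's element tree.

```python
import re
from bs4 import BeautifulSoup

html = '<ul><li class="price">$19.99</li><li class="price">$4.50</li></ul>'

# Expression matching: a regex over the raw markup
print(re.findall(r"\$\d+\.\d{2}", html))  # ['$19.99', '$4.50']

# DOM parsing: navigate the parsed element tree instead
soup = BeautifulSoup(html, "html.parser")
print([li.get_text() for li in soup.select("li.price")])
```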

The main questions asked about web scraping touch on relevance. Is the process relevant in the business world? The answer to this question is yes. The fact that it is employed by large companies around the world and has delivered many rewards says it all. It is worth noting that some people regard this technology as a plagiarism tool, while others consider it a useful tool that harvests the data required for business success.

Using the web scraping process to extract data from the internet for competitor analysis is highly recommended. If you do, be sure to look for any pattern or trend that can work in a given market.



Source: http://ezinearticles.com/?Is-Web-Scraping-Relevant-in-Todays-Business-World?&id=7091414

Wednesday, 12 June 2013

Amazon launches its own product lines

Amazon's next-generation Kindle reader is hogging most of the company's press this year, but even the new Kindle can't flip a fish. For that, you need famous Seattle chef Tom Douglas - or, in his absence, the just-released "Tom Douglas by Pinzon Stainless-Steel Slotted Fish Turner."

You can only buy it from Amazon. That's what "by Pinzon" means.

Pinzon is one of four private labels Seattle-based Amazon has been quietly developing over the last half-decade. The new Tom Douglas line, announced last week, is the most visible example of the online retail giant's selling its own brands right next to other companies' competing products on the Amazon Web site.

"Amazon's mission as a whole is to offer customers everything they want online," said Anya Waring, a company spokesperson. "Our reason for offering private-label brands is to offer our customers great value and quality as well as competitive prices."


Costco has its "Kirkland" house label, and other bricks-and-mortar companies such as Walmart and Whole Foods have company brands on the shelves next to the competition.

But Amazon has pioneered the practice among sizable Internet-only retailers. Its four house brands - Pinzon, Strathwood, Denali and Pike Street - are manufactured in 10 countries and offer thousands of products, from power screwdrivers to lawn chairs to bed linens.

This week's venture, though, is the first time Amazon has teamed with a celebrity to put a face on a hand-picked product line. Every one of the dozens of kitchen items that carry the Tom Douglas by Pinzon mark (his trademark is stamped on many of the tools) was chosen and used by the owner of restaurants The Palace, Dahlia Lounge, Etta's and Lola.

"Most of them have been in use in my own kitchen for at least a year," said Douglas, who in 2008 was named Bon Appetit magazine's Restaurateur of the Year.

"When we started this, Tom envisioned his collection as having the best quality and the best prices. In some cases we found that with major national brands, and we co-branded with companies that our customers know and trust," said Kerry Morris, Amazon's senior private label manager.

The full title of that fish turner, for instance, includes the name Dexter-Russell, a manufacturer of tools and cutlery since 1818. "We tried a lot of fish flippers from different suppliers," said Morris. "The bottom line is, this is the one Tom wanted to put his name on."

On Amazon's site, each item in the new line comes with tips from Tom. "These fabulous glasses compliment the wine," Douglas writes of white wine glasses, "and are much more durable than most 'fancy' glasses." Describing a Japanese knife, he warns against using it to scrape food off the cutting board. "Scraping dulls your knife quickly. Gather your prep with a board scraper."

The items are offered at reasonable prices - the six wine glasses are $40; the knife, a seven-inch Santoku, costs $25.

Douglas can go on at length about his collection and how it has been chosen to make people more comfortable with cooking. On a Japanese knife line, he likes the feel of the handles, and promises that Amazon will soon make it easy for customers to pick the best one to fit them personally. "It's not operational yet," Douglas says, "but there will even be a way to pull up a screen and put your hand up next to it and say, 'I need the eight-inch knife.'"

From Douglas' standpoint, the partnership with Amazon gives him a way to find and market tools he likes and uses. "Amazon, by its size, gives me a way of dealing with manufacturers that I couldn't do on my own," he said.

What Amazon gets includes Douglas fans who might well be willing to purchase cooking gear that most people have never heard of. "Fish tweezers, for example," Douglas said. "I think it gives them the flexibility to sell products that they wouldn't otherwise."

Amazon's future almost certainly includes growing its private brands, although the company does not publicly disclose such details.

"We fill those pipelines with a selection of products because our customers have been looking for those types of products," said Morris, the senior manager. "When we see an opportunity, we will pursue it."

Outside analysts have said the private-label business will continue to expand.

As to whether other companies are unhappy with competing with Amazon products on Amazon's own site, Morris said she hasn't heard any complaints. "It's been a very open, welcoming opportunity that I've only heard positive feedback on," she said.

Or it may be that Amazon is so big that complaints would be useless. Scot Wingo, chief executive of ChannelAdvisor, a software firm that helps retailers sell online, told the Wall Street Journal he does not expect much backlash from other companies that sell through Amazon.

"You can't pull out of the Walmart of the online world," he said.



Source: http://www.seattlepi.com/local/article/Amazon-launches-its-own-product-lines-1304899.php

Tuesday, 11 June 2013

Making Money With Tumblr Blogs And Amazon Products

Tumblr is a popular blogging platform where users can create blogs about their own personal hobbies and pastimes or share their favorite photos or quotes. The main service offered by Tumblr that should interest any internet marketer is the ability to like, reblog and follow other users' blogs and blog posts.

The money-making method that is going to be shared here takes advantage of this community-oriented blogging, with the blog content having links to various products on the Amazon network, hoping to generate income when users click through.

You are going to be setting up a brand new blog on Tumblr with a specific niche subject that uses various photos as content, hoping to get likes, follows and reblogs of the content shared. The subject you choose for your Tumblr photo blog doesn't really matter, but you should try to choose a subject that other users would be likely to share on their own blogs with your Amazon affiliate link embedded.

After choosing your niche and creating a Tumblr blog with a relevant title and theme, you will need to make sure you register as an affiliate at Amazon.com. Once the previous steps have been completed, you can start the process of populating your Tumblr blog with niche-related photos embedded with a link back to your affiliated Amazon products.

The best piece of software to help with this process is Tumbleforce, which helps you scrape and post niche-relevant photos embedded with Amazon affiliate links, and also find relevant Tumblr users who are interested in your niche and will more than likely reblog and like your posts, or actually follow the blog you have created.

Tumbleforce is the software that will be used in this money-making method, so it is highly recommended you purchase it to help automate many of the processes involved in building a popular Tumblr blog. The first thing you need to do when opening the Tumbleforce software is to enter the relevant username and password for your Tumblr blog.


Then you will be using the Tumbleforce scraper to scrape images to post on your Tumblr blog, so where it says 'Flickr', choose 'Pinterest' from the drop-down list and then select a relevant category to scrape photos from. Next set the pages to 10 so that the scraper will gather enough photos for you to use on your Tumblr blog, and finally click 'Scrape' to start the process.

Once the Tumbleforce software has scraped enough photos you can start the process of posting the photos to your blog, so right-click the list of photo URLs scraped from Pinterest and select 'Check all' so that a tick appears in the box next to each URL.

Now you will need to select an Amazon product to link to with your affiliate code embedded. Make sure you are logged into Amazon.com and find a product that is relevant to your niche. Once you find a relevant product to promote, select 'Link to this page' on Amazon.com and copy your affiliate link to the clipboard.

Paste your Amazon affiliate link into the box titled 'Source URL' and also into the box titled 'Clickthrough URL' inside the Tumbleforce software. Put some relevant keywords in the box titled 'Tags' and enter #COMMENT into the box marked 'Post', and then finally right-click the list of photo URLs that were scraped from Pinterest and select 'Post' then 'Auto-select'.

The software will now start posting the photos with comments to your Tumblr blog, all embedded with the affiliate link to your chosen Amazon product. The next thing you will need to do is follow some other Tumblr users in your niche to help gain exposure for your blog posts and hopefully generate some likes and reblogs of your photo content.
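If you would rather script the posting step than drive a GUI tool, the official pytumblr client can do roughly the same thing. A sketch, assuming you have registered a Tumblr application for the OAuth keys; the blog name, image URL and affiliate link are placeholders:

```python
import pytumblr  # official Tumblr API client: pip install pytumblr

client = pytumblr.TumblrRestClient(
    "consumer_key", "consumer_secret", "oauth_token", "oauth_secret"
)

client.create_photo(
    "yourblogname",
    state="published",
    tags=["your", "niche", "keywords"],
    source="http://example.com/scraped-photo.jpg",             # image to post
    caption="Your comment on the photo",
    link="http://www.amazon.com/dp/XXXXXXXXXX?tag=yourid-20",  # click-through URL
)
```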

In Tumbleforce choose 'Tumblr Users' from the drop-down list in the top right menu, enter a relevant keyword for your niche, set the pages to 20 and then click 'Scrape' so the software starts gathering Tumblr users for you to follow. Once enough Tumblr users have been scraped, just right-click the list of URLs and select 'Check all', then right-click the list again and select 'Users' then click on 'Follow'.

The software will now follow other Tumblr users in your niche, and hopefully you will gain some followers and, more importantly, reblogs, likes and click-throughs to your Amazon affiliate products. That is basically what the money-making method is about. All that is recommended now is to continue adding new photos in your niche, possibly choose other Amazon products to link to, follow other Tumblr users, and unfollow users who are not following you back.

Just by doing this on a regular basis, within a few weeks you will have built a popular Tumblr blog that starts to get reblogs and click-throughs to your Amazon products. What is really great about this method is that even if a visitor is not interested in the Amazon product they click through to, there is a good chance they will browse and look for other products that they may purchase and earn you a commission.


Source: http://blackhatsparrow.com/making-money-with-tumblr-blogs-and-amazon-products/

Saturday, 1 June 2013

Increasing Accessibility by Scraping Information From PDF

You may have heard about data scraping, a method used by computer programs to extract data from the output of another program. To put it simply, this is a process which involves the automatic sorting of information found in different resources, including internet sources such as HTML files, PDFs or other documents, followed by the collection of the pertinent information. These pieces of information are stored in databases or spreadsheets so that users can retrieve them later.

Most websites today have text that can be accessed and read easily from the source code. However, many businesses nowadays choose to use Adobe PDF files, the Portable Document Format. This is a type of file that can be viewed using the free software known as Adobe Acrobat, and almost any operating system supports it. There are many advantages to using PDF files. Among them is that a document looks exactly the same on any computer you view it on, which makes the format ideal for business documents or specification sheets. Of course there are disadvantages as well. One is that the text contained in the file is sometimes converted into an image, which often causes problems when it comes to copying and pasting.

This is why some turn to scraping information from PDFs. This is often called PDF scraping, a process just like data scraping except that you are extracting information contained in PDF files. In order to begin scraping information from PDFs, you must choose and use a tool that is specifically designed for this process. However, you will find that it is not easy to locate the right tool to perform PDF scraping effectively, because most tools today have problems obtaining exactly the data you want without customization.
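For the simple case where a PDF still contains real text rather than scanned images, a few lines with the open-source pypdf library are enough of a sketch (the file name is a placeholder):

```python
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("specification_sheet.pdf")
for page in reader.pages:
    text = page.extract_text()
    if text:
        print(text)

# Scanned PDFs store the text as images, so they need OCR instead,
# which is exactly the copy-and-paste problem described above.
```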

Nevertheless, if you search well enough, you will be able to find the program you are looking for. There is no need to have programming knowledge in order to use these tools; you can easily specify your own preferences and the software will do the rest of the work for you. There are also companies you can contact that will perform the task for you, since they have the right tools. If you choose to do things manually, you will find it tedious and complicated, whereas professionals can finish the job in very little time. Scraping information from PDFs is a process of collecting information that is already available on the internet, and it does not in itself infringe copyright laws.


Source: http://ezinearticles.com/?Increasing-Accessibility-by-Scraping-Information-From-PDF&id=4593863