Monday, March 26, 2007

Skrenta on beating Google

Rich Skrenta writes "How to beat Google, part 1". Some brief excerpts:
Grow a spine people! You have a giant growing market with just one dominant competitor, not even any real #2 ... Get a stick and try to knock G's crown off.

Here are my tips to get started:

A conventional attack against Google's search product will fail ... A copy of their product with your brand has no pull.

Forget interface innovation ... Interface features only get in the way.

Forget about asking users to do anything besides typing two words into a box.

Users do not click on clusters, or tags, or categories, or directory tabs, or pulldowns. Ever. Extra work from users is going the wrong way. You want to figure out how the user can do even less work.
Go read the whole thing. It's a good read.

Personalized search, by the way, requires no extra work from the user, works from just a couple words in a box, adds no interface goo, and could provide a substantially different experience than using Google.
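To make that concrete, here is a minimal sketch of personalized re-ranking. Everything in it -- the function name, the topic labels, the 0.3 blending weight -- is an illustrative assumption, not a description of any real engine. The point is only that the user still types a couple of words into a box; the personalization comes entirely from a profile built from their past behavior, with no extra interface work.

```python
def personalize(results, profile, weight=0.3):
    """Blend each result's base relevance with the user's topic affinity.

    `profile` maps topics to affinities learned from past behavior
    (e.g. clickstream); an empty profile leaves the base ranking unchanged.
    """
    def score(r):
        affinity = profile.get(r["topic"], 0.0)
        return (1 - weight) * r["base_score"] + weight * affinity
    return sorted(results, key=score, reverse=True)

# Base ranking for the ambiguous query "jaguar", identical for every user.
results = [
    {"url": "jaguar-cars.example",   "topic": "autos",    "base_score": 0.9},
    {"url": "jaguar-animal.example", "topic": "wildlife", "base_score": 0.8},
]

# A hypothetical profile inferred from long-term behavior:
# this user mostly reads wildlife pages.
profile = {"wildlife": 1.0, "autos": 0.0}

reranked = personalize(results, profile)
print([r["url"] for r in reranked])
```

With the wildlife-leaning profile, the animal page outscores the car page (0.86 vs. 0.63) and moves to the top; with an empty profile, the base ranking comes back untouched. Same two words in the box, a substantially different result list.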

Some of my previous posts -- such as "Perfect Search and the clickstream", "Search without searching", and "Search as a matching engine" -- discuss in some detail why personalized search might be a good path for those who seek to beat Google.

See also my posts, "Kill Google, Vol. 3" and "Kill Google, Vol. 2", which argue for attacking Google's lifeblood, their advertising revenue, instead of focusing on search.

4 comments:

Anonymous said...

Personalized search, by the way, requires no extra work from the user, works from just a couple words in a box, adds no interface goo, and could provide a substantially different experience than using Google.

Except that Google bought Kaltix in 2003, has been developing personalized search for four years now, and has had the advantage of countless billions of search queries, results, and clickthroughs on which to test, tune, etc.

From our previous discussions on big data vs. smarter algorithms, the money is (it seems) on big data. So how is anyone going to be able to compete with Google, given that they've probably got the biggest data of them all?

Anonymous said...

Skrenta writes: Forget interface innovation. The editorial value of search is in the index, not the interface. That's why google's minimalist interface is so appealing. Interface features only get in the way.

I would like to know, though, on what basis he can conclude this. Maybe I'm an anomaly, but I use the query expansion suggestions from Yahoo and Ask all the time. It's no more work, to me, to click a query expansion term than it is to click one of the SERP links.

I still really do not understand why it is supposedly so easy for the user to read and evaluate the top 4-5 items in the list, and difficult for them to read 4-5 single words and phrases. I don't understand how the former is less work, and the latter is more.

Greg Linden said...

Hi, Jeremy. On personalized search and Kaltix, I think that Google is pursuing one method of doing personalized search -- biasing search results broadly based on long-term behavior -- and not necessarily the only or best method.

I have more on Google's personalized search efforts and the potential issues with their particular approach in some of my older posts such as "A real personalized search from Google" and "Google Personalized Search and Bigtable".

On the interface innovation, I think Rich is referring to more radical interface innovation. You may have to ask him, but I suspect a good example might be Quintura, which certainly has a nifty interface, but it may mostly get in the way of getting work done.

Anonymous said...

Ok, so I guess I'm just a bit confused, because I see too many constraints from the Skrenta article.

Let me explain: I've heard the Google guys say that they only think search is 5% solved. I pretty much agree.

Yet with the ideas I heard from Skrenta, I see those only bumping search from 5% to maybe 7 or 8%. Your idea with personalization may bump it to 10 or 12%.

But how are we going to really make those big leaps? How are we going to get to 25% solved? 50%? 75%?

Let's imagine that 25% solution. Is that still going to look like a ranked list of 1.4 million documents, shown ten at a time, with results delivered in 0.4 seconds, and DMCA content omitted as per federal law? Is the 25% solution really going to look like that? How about the 50% solution?

Maybe it's just me, but I find it vaguely depressing to think that the way we will be interacting with information, when search is 50% solved, is by scanning down through a list of 1.4 million documents, ten documents at a time.

Maybe current solutions (Quintura, Vivisimo) are not the best way to do it ... maybe they're only 3% or 4% solutions at the moment. But maybe they're one step back in order to take two steps forward.

I just think it is a mistake to confuse sparseness with intelligence. There have got to be better ways to interact with information. So I strongly disagree that we should forget about interface innovation.

Anyway, I'm just beating my usual drum...