We calculate what’s interesting on the web in real time. As new content is published across the web, Gravity crawls and semantically analyzes each page and tracks its performance metrics. Every page we crawl produces an Interest Graph for that page. How we view the collection of Interest Graphs (for a single user and for all available content), and how we then blend that view with the associated performance metrics, is what enables us to provide consistently high-quality, relevant content recommendations. Ultimately, our Interest Graph technology allows us to deliver personalized recommendations to each user based on the topics they engage with most and on what’s exciting right now.
The Interest Graph
Natural Language Processing
We crawl and index the entire internet to figure out what people are writing about. Doing this requires computers that know how to read like humans: parsing sentences and picking apart their grammar into nouns, verbs, adjectives, and so on. Your high school English teacher would be proud of this system. We also understand colloquialisms: "red as a lobster" usually means sunburn, even though those words literally map to seafood and colors.
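A minimal sketch of the two ideas above: tag each token with its part of speech, then let known multi-word idioms override the literal topic mapping. The lexicon, idiom table, and topic names here are illustrative stand-ins, not Gravity's actual data or pipeline.

```python
# Toy part-of-speech lexicon: word -> tag (illustrative only).
POS_LEXICON = {
    "red": "ADJ", "as": "ADP", "a": "DET", "lobster": "NOUN", "the": "DET",
}

# Literal word -> topic mapping (illustrative only).
LITERAL_TOPICS = {"red": "colors", "lobster": "seafood"}

# Known idioms mapped to the concept they usually express.
IDIOMS = {("red", "as", "a", "lobster"): "sunburn"}

def analyze(sentence):
    """Return (POS-tagged tokens, set of topics) for a sentence."""
    tokens = sentence.lower().split()
    tagged = [(t, POS_LEXICON.get(t, "UNK")) for t in tokens]
    topics = set()
    # Idiom pass first: a matched idiom overrides literal word senses.
    for phrase, concept in IDIOMS.items():
        n = len(phrase)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == phrase:
                topics.add(concept)
    # Literal pass only when no idiom captured the phrase.
    if not topics:
        for tok, _pos in tagged:
            if tok in LITERAL_TOPICS:
                topics.add(LITERAL_TOPICS[tok])
    return tagged, topics
```

With this sketch, "red as a lobster" maps to the sunburn concept, while "the red lobster" falls back to the literal colors and seafood topics.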
Algorithms: The Math Behind Recommendations
True Personalization: Context, Popularity, Virality & Recency
Some examples are: personalization, behavioral patterns (such as collaborative filtering), contextual relevancy, and popularity (measured by page views, unique visitors, and social counts such as Facebook Likes and Twitter tweets/retweets). All of these can be A/B tested to determine which algorithm performs best for your particular set of content.
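One way to picture how these signals combine is a weighted blend, where each A/B variant is simply a different set of weights. The signal names, weights, and scores below are illustrative assumptions, not the actual Gravity algorithm.

```python
def blended_score(article, weights):
    """Combine per-article signals (each normalized to 0..1) by weight."""
    return sum(weights.get(name, 0.0) * value
               for name, value in article["signals"].items())

articles = [
    {"id": "a1", "signals": {"context": 0.9, "popularity": 0.2, "recency": 0.8}},
    {"id": "a2", "signals": {"context": 0.3, "popularity": 0.9, "recency": 0.4}},
]

# Two candidate variants to A/B test: one favors contextual relevancy,
# the other favors popularity.
variant_a = {"context": 0.6, "popularity": 0.2, "recency": 0.2}
variant_b = {"context": 0.2, "popularity": 0.6, "recency": 0.2}

ranked_a = sorted(articles, key=lambda a: blended_score(a, variant_a), reverse=True)
ranked_b = sorted(articles, key=lambda a: blended_score(a, variant_b), reverse=True)
```

Here variant A ranks the contextually relevant article first, while variant B ranks the popular one first; an A/B test would then measure which ranking earns more engagement on your content.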
Feedback Loops: Always Be Testing
Any content targeted at a particular user has a feedback loop back to Gravity that tells us whether that recommendation performed well. This informs the system, so it knows when it did a good job and when it did not. It also learns from first-time users to solve the cold-start problem: how do you recommend something to someone you know nothing about?
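The loop described above can be sketched as a recommender that records impressions and clicks, exploits its best-performing content for known users, and falls back to exploration for cold-start users. The class, the click-through-rate scoring, and the epsilon-greedy exploration are illustrative assumptions, not Gravity's actual system.

```python
import random

class FeedbackRecommender:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon     # fraction of traffic spent exploring
        self.clicks = {}           # article_id -> clicks observed
        self.impressions = {}      # article_id -> times shown

    def ctr(self, article_id):
        """Observed click-through rate for an article (0.0 if never shown)."""
        shown = self.impressions.get(article_id, 0)
        return self.clicks.get(article_id, 0) / shown if shown else 0.0

    def recommend(self, candidates, user_history):
        # Cold start: with no history (or on an exploration roll), pick at
        # random; otherwise exploit the best-performing article.
        if not user_history or random.random() < self.epsilon:
            choice = random.choice(candidates)
        else:
            choice = max(candidates, key=self.ctr)
        self.impressions[choice] = self.impressions.get(choice, 0) + 1
        return choice

    def feedback(self, article_id, clicked):
        # The feedback loop: each impression reports back whether it worked.
        if clicked:
            self.clicks[article_id] = self.clicks.get(article_id, 0) + 1
```

Every call to `feedback` sharpens the click-through estimates, so the exploit branch improves over time, while the exploration branch keeps gathering signal about new content and new users.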