Public Policy Blog
Updates on technology policy issues
Heroes of the open Internet
Friday, March 28, 2008
Posted by Andrew McLaughlin, Director of Global Public Policy
The fight to
keep the Internet free and open
is, at its heart, motivated by a keen vision of how the world ought to be -- interconnected by open communications networks on which free expression, creativity, community, culture, commerce, politics, innovation, and competition thrive. The movement behind that fight is fueled by a powerful awareness that the Internet has, to an astonishing extent, made that vision possible, yet today finds itself under threat from a complex matrix of business and political interests.
In recent weeks, there has been some
good news
for the open Internet movement. In response to a growing public outcry, some major wireline carriers around the world are taking
small but important steps
toward content-, service-, and protocol-neutral network management. Some major wireless carriers have announced moves toward opening their networks. The 700 MHz auction
triggered
important open-device and open-application requirements for new nationwide mobile networks. The Federal Communications Commission has been showing genuine concern about the potential for abuse inherent in non-neutral carrier policies. And key
members of Congress
are calling for legislative action. Pretty impressive (though no one's counting any unhatched chickens, I can assure you).
There are many heroes who built the movement and got it to this point, and one of them just got some well-deserved recognition: The
Washington Post
is today
running a profile
of Free Press's Ben Scott. A tip of the hat to Ben and his team at Free Press. It's great to see a major newspaper getting into the details around the open Internet debate.
Google's privacy team comes to Washington
Friday, March 28, 2008
Posted by Jane Horvath, Senior Privacy Counsel
When I joined Google last fall, one of the things that struck me was the diverse group of people who were thinking about privacy issues throughout the company -- not just lawyers like me, but also engineers, network security experts, and product developers. Given Google's national and global reach, it's rare to get the people who work in privacy at Google together in one room.
That's why I'm happy when events like this week's
International Association of Privacy Professionals' Annual Privacy Summit
come around and give us an opportunity to get together. In addition to geeking out with other privacy specialists on topics like "notice and consent," we took advantage of this reunion of sorts in Washington D.C. this week to:
Participate in IAPP Summit events, including a panel on behavioral targeting in which Global Privacy Counsel Peter Fleischer described Google's contextual ad serving approach and suggested ways in which Google and other advertisers can provide even better privacy notice to users;
Hit the IAPP Exhibition Hall, where we shared our
Google Privacy Channel on YouTube
and our newly revamped, multimedia
Privacy Center
with fellow IAPP attendees;
Continue our ongoing dialogue with the FTC about the
Online Behavioral Targeting Privacy Principles
they've proposed, and underscore our support for establishing self-regulatory practices in the online advertising space that promote transparency, consumer choice, and security; and
Convene a group of privacy advocates, academics and experts for a lively roundtable discussion at Google's
new DC digs
on the challenges inherent in harnessing the benefits of data while protecting user privacy.
In the coming months we'll continue to engage with privacy stakeholders worldwide on the important issues we discussed in Washington this week.
Making search better in Catalonia, Estonia, and everywhere else
Tuesday, March 25, 2008
Posted by Paul Haahr and Steve Baker, Software Engineers, Search Quality
(Cross-posted from the
Official Google Blog
)
We recently began a series of posts on how we harness the power of data.
Earlier
we told you how data has been critical to the advancement of search, and about using data to
make our products safe
and to
prevent fraud
; this post is the newest in the series. -Ed.
One of the most important uses of data at Google is building language models. By analyzing how people use language, we build models that enable us to interpret searches better, offer spelling corrections, understand when alternative forms of words are needed, offer
language
translation
, and even
suggest when searching in another language is appropriate
.
One place we use these models is to find alternatives for words used in searches. For example, for both English and French users, "GM" often means the company "General Motors," but our language model understands that in French searches like
seconde GM
, it means "Guerre Mondiale" (World War), whereas in
STI GM
it means "Génie Mécanique" (Mechanical Engineering). Another meaning in English is "genetically modified," which our language model understands in
GM corn
. We've learned this based on the documents we've seen on the web and by observing that users will use both "genetically modified" and "GM" in the same set of searches.
We use similar techniques in all languages. For example, if a Catalan user searches for
resultat elecció barris BCN
(searching for the result of a neighborhood election in Barcelona), Google will also find pages that use the words "resultats" or "eleccions" or that talk about "Barcelona" instead of "BCN." And our language models also tell us that the Estonian user looking for
Tartu juuksur
, a barber in Tartu, might also be interested in a "juuksurisalong," or "barber shop."
In the past, language models were built from dictionaries by hand. But such systems are incomplete and don't reflect how people actually use language. Because our language models are based on users' interactions with Google, they are more precise and comprehensive -- for example, they incorporate names, idioms, colloquial usage, and newly coined words not often found in dictionaries.
When building our models, we use billions of web documents and as much historical search data as we can, in order to have the most comprehensive understanding of language possible. We analyze how our users searched and how they revised their searches. By looking across the aggregated searches of many users, we can infer the relationships of words to each other.
Queries are not made in isolation -- analyzing a single search in the context of the searches before and after it helps us understand a searcher's intent and make inferences. Also, by analyzing how users modify their searches, we've learned related words, variant grammatical forms, spelling corrections, and the concepts behind users' information needs. (We're able to make these connections between searches using cookie IDs -- small pieces of data stored in visitors' browsers that allow us to distinguish different users. To understand how cookies work,
watch this video
.
)
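To make that idea concrete, here is a rough sketch -- with invented session data, and far simpler than our production language models -- of how consecutive queries in a session can suggest that two terms are related: when users swap one word for another between searches, that substitution is a useful signal.

```python
# A rough illustration only -- invented session data, not Google's actual system.
from collections import Counter, defaultdict

# Hypothetical aggregated log data: (session_id, query) pairs in time order.
query_log = [
    ("s1", "GM corn"), ("s1", "genetically modified corn"),
    ("s2", "resultat eleccio barris BCN"), ("s2", "resultats eleccions Barcelona"),
    ("s3", "Tartu juuksur"), ("s3", "Tartu juuksurisalong"),
]

def related_terms(log, min_count=1):
    """Count word substitutions between consecutive queries in the same session."""
    sessions = defaultdict(list)
    for session_id, query in log:
        sessions[session_id].append(set(query.lower().split()))
    substitutions = Counter()
    for queries in sessions.values():
        for before, after in zip(queries, queries[1:]):
            for removed in before - after:      # words the user dropped
                for added in after - before:    # words the user added instead
                    substitutions[(removed, added)] += 1
    return {pair: n for pair, n in substitutions.items() if n >= min_count}

print(related_terms(query_log))
# Candidate pairs like ('bcn', 'barcelona') and ('juuksur', 'juuksurisalong') emerge;
# with billions of searches, the real signals dominate the noise.
```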
To provide more relevant search results, Google is constantly developing new techniques for language modeling and building better models. One element in building better language models is
using more data
collected over longer periods of time. In languages with many documents and users, such as English, our language models allow us to improve results deep into the "long tail" of searches, learning about rare usages. However, for languages with fewer users and fewer documents on the web, building language models can be a challenge. For those languages we need to work with longer periods of data to build our models. For example, it takes more than a year of searches in Catalan to provide an amount of data comparable to a single day of searching in English; for Estonian, more than two and a half years' worth of searching is needed to match a day of English. Having longer periods of data enables us to improve search for these less commonly used languages.
At Google, we want to ensure that we can help users everywhere find the things they're looking for; providing accurate, relevant results for searches in all languages worldwide is core to Google's mission. Building extensive models of historical usage in every language we can, especially when there are few users, is an essential piece of making search work for everyone, everywhere.
A common sense approach to Internet safety
Tuesday, March 25, 2008
Posted by Elliot Schrage, Vice President of Global Communications and Public Affairs
(Cross-posted from the
Official Google Blog
)
Over the years, we've built tools and offered resources to help kids and families stay safe online. Our
SafeSearch
feature, for example, helps filter explicit content from search results.
We've also been involved in a variety of local initiatives to educate families about how to stay safe while surfing the web. Here are a few highlights:
In the U.S., we've worked with
Common Sense Media
to promote awareness about online safety and have donated hardware and software to improve the ability of the
National Center for Missing and Exploited Children
to combat child exploitation.
Google UK has collaborated with child safety organizations such as
Beatbullying
and
Childnet
to raise awareness about cyberbullying and share prevention messages, and with law enforcement authorities, including the
Child Exploitation and Online Protection Centre
, to fight online exploitation.
Google India initiated "Be NetSmart," an Internet safety campaign created in cooperation with local law enforcement authorities that aims to educate students, parents, and teachers across the country about the great value the Internet can bring to their lives, while also teaching best practices for safe surfing.
Google France launched child safety education initiatives including
Tour de France des Collèges
and
Cherche Net
that are designed to teach kids how to use the Internet responsibly.
And Google Germany recently worked with the national government, industry representatives, and a number of local organizations to launch a
search engine for children
.
As part of these ongoing efforts to provide online safety resources for parents and kids, we've created
Tips for Online Safety
, a site designed to help families find quick links to safety tools like SafeSearch, as well as new resources, like a video offering online safety pointers that we've developed in partnership with Common Sense Media. In the
video
, Anne Zehren, president of Common Sense, offers easy-to-implement tips, like setting privacy and sharing controls on social networking sites and establishing reasonable rules for Internet use at home with appropriate levels of supervision.
Users can also download our new
Online Family Safety Guide
(
PDF
), which includes useful Internet safety pointers for parents, or check out a quick
tutorial
on SafeSearch created by one of our partner organizations, GetNetWise.
We all have roles to play in keeping kids safe online. Parents need to be involved with their kids' online lives and teach them how to make smart decisions. And Internet companies like Google need to continue to empower parents and kids with tools and resources that help put them in control of their online experiences and make web surfing safer.
The end of the FCC 700 MHz auction
Thursday, March 20, 2008
Posted by Richard Whitt, Washington Telecom and Media Counsel, and Joseph Faber, Corporate Counsel
This afternoon the Federal Communications Commission
announced
the
results
of its 700 MHz spectrum auction. While the Commission's anti-collusion rules prevent us from saying much at this point, one thing is clear:
although Google didn't pick up any spectrum licenses, the auction produced a major victory for American consumers.
We congratulate the winners and look forward to a more open wireless world. As a result of the auction,
consumers whose devices use the C-block of spectrum soon will be able to use any wireless device they wish, and download to their devices any applications and content they wish.
Consumers soon should begin enjoying new, Internet-like freedom to get the most out of their mobile phones and other wireless devices.
We'll have more to say about the auction in the near future. Stay tuned.
Google Maps springing into service
Thursday, March 20, 2008
Posted by Galen Panger, Associate, Global Communications and Public Affairs
On the heels of his previous Google Maps
tracing his trip to Iraq
and
showcasing Nebraska tourist destinations
, Nebraska Senator Ben Nelson is once again showing off his mad mashup skills with a new "
My Map
" highlighting students in his state who are spending spring break volunteering in the U.S. and abroad.
The nursing and physician assistant programs at
Union College
, for example, will be spending
twelve days in Nicaragua
providing village health care to a remote group of Miskito Indians found in the country's northeast corner. Students from
Creighton University
will be traveling to New Orleans to
help rebuild homes
there.
This is another great example of how maps can be an effective communications tool for politicians and other public officials -- both to communicate with citizens and, as in this case, to recognize them for the example they set for others. Maybe a Senate Maps Mashup Caucus isn't far behind?
Using data to help prevent fraud
Tuesday, March 18, 2008
Posted by Shuman Ghosemajumder, Business Product Manager for Trust & Safety
(Cross-posted from the
Official Google Blog
)
We recently began a series of posts on how we harness the power of data
.
Earlier
we told you
how data has been critical to the advancement of search technology
.
Then
we shared how we use log data to help make Google products safer for users. This post is the newest in the series. -Ed.
Protecting our advertisers against click fraud is a lot like solving a crime: the more clues we have, the better we can determine
which clicks to mark as invalid
, so advertisers are not charged for them.
As we've mentioned before, our
Ad Traffic Quality team
built, and is constantly adding to, our
three-stage system
for detecting invalid clicks. The three stages are: (1) proactive real-time filters, (2) proactive offline analysis, and (3) reactive investigations.
So how do we use log information for click fraud detection? Our logs are where we get the clues for the detective work. They provide us with the repository of data that is used to detect patterns, anomalous behavior, and other signals indicative of click fraud.
Millions of users click on AdWords ads every day. Every single one of those clicks -- and the even more numerous impressions associated with them -- is analyzed by our filters (stage 1), which operate in real time. This stage certainly uses our log data, but it is stages 2 and 3 that rely even more heavily on deeper analysis of the data in our logs. For example, in stage 2, our team pores over the millions of impressions and clicks -- as well as conversions -- over a longer time period. In combing through all this information, our team is looking for unusual behavior in hundreds of different data points.
IP addresses
of computers clicking on ads are very useful data points. A simple use of IP addresses is determining the source location for traffic. That is, for a given publisher or advertiser, where are their clicks coming from? Are they all coming from one country or city? Is that normal for an ad of this type? Although we don't use this information to identify individuals, we look at it in aggregate and study patterns. This information is imperfect, but analyzing it in large volumes is very helpful in preventing fraud. For example, examining an IP address usually tells us which ISP that person is using. It is easy for people on most home Internet connections to get a new IP address by simply rebooting their DSL or cable modem. However, that new IP address will still be registered to their ISP, so additional ad clicks from that machine will still have something in common. Seeing an abnormally high number of clicks on a single publisher from the same ISP isn't necessarily proof of fraud, but it does look suspicious and raises a flag for us to investigate. Other information contained in our logs, such as the browser type and operating system of machines associated with ad clicks, is analyzed in similar ways.
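As a toy illustration of that last point -- with hypothetical click records and thresholds, not our actual filters -- one could aggregate clicks by publisher and ISP and flag pairs where a single ISP accounts for an unusually large share of a publisher's traffic:

```python
# Illustrative sketch only; publisher IDs, ISP names, and thresholds are invented.
from collections import Counter

# Hypothetical click records: (publisher_id, isp_name)
clicks = [
    ("pub-17", "ExampleDSL"), ("pub-17", "ExampleDSL"), ("pub-17", "ExampleDSL"),
    ("pub-17", "OtherCable"), ("pub-42", "ExampleDSL"), ("pub-42", "OtherCable"),
]

def suspicious_pairs(click_records, share_threshold=0.6, min_clicks=3):
    """Flag (publisher, ISP) pairs where one ISP dominates a publisher's clicks."""
    per_publisher = Counter(pub for pub, _ in click_records)
    per_pair = Counter(click_records)
    flagged = []
    for (pub, isp), count in per_pair.items():
        share = count / per_publisher[pub]
        if count >= min_clicks and share >= share_threshold:
            flagged.append((pub, isp, count, round(share, 2)))
    return flagged

print(suspicious_pairs(clicks))
# [('pub-17', 'ExampleDSL', 3, 0.75)] -- suspicious and worth investigating, not proof of fraud
```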
These data points are just a few examples of hundreds of different factors we take into account in click fraud detection. Without this information, and enough of it to identify fraud attempted over a longer time period, it would be extremely difficult to detect invalid clicks with a high degree of confidence, and proactively create filters that help optimize advertiser ROI. Of course, we don't need this information forever; last year we started
anonymizing server logs
after 18 months. As always, our goal is to balance the utility of this information (as we try to improve Google’s services for you) with the best privacy practices for our users.
If you want to learn more about how we collect information to better detect click fraud, visit our
Ad Traffic Quality Resource Center
.
Checkout now for PACs
Monday, March 17, 2008
Posted by Shoaib Makani, Associate Product Marketing Manager
Earlier this year
we launched
Google Checkout for Political Contributions
, a fast and convenient way for supporters to contribute to political campaigns. We did this in part to help individuals engage more directly in the political process. With that in mind, we're now expanding the service to include
political action committees
, so users can support not just candidates running for national office, but also specific causes. If you're a PAC that has registered with the
Federal Election Commission
, head
here
to learn how to get started.
Using log data to help keep you safe
Thursday, March 13, 2008
Posted by Niels Provos, Google Security Team
(Cross-posted from
Official Google Blog
)
We recently began two new series of posts. The first, which explains how we harness data for our users, started with
this post
. The second, focusing on how we secure information and how users can protect themselves online,
began here
.
This post is the second installment in both series. -Ed.
We sometimes get questions on what Google does with server log data, which registers how users are interacting with our services. We take great care in protecting this data, and while we've talked previously about
some of the ways
it can be useful, something we haven't covered yet is how it can help us make Google products safer for our users.
While the Internet on the whole is a safe place, and most of us will never fall victim to an attack, there are more than a few threats out there, and
we do everything we can
to help you stay a step ahead of them.
Any information we can gather on how attacks are launched and propagated helps us do so.
That's where server log data comes in. We analyze logs for anomalies or other clues that might suggest malware or phishing attacks in our search results, attacks on our products and services, and other threats to our users. And because we have a reasonably significant data sample, with logs stretching back several months, we're able to perform aggregate, long-term analyses that can uncover new security threats, provide greater understanding of how previous threats impacted our users, and help us ensure that our threat detection and prevention measures are properly tuned.
We can't share too much detail (we need to be careful not to provide too many clues on what we look for), but we can use historical examples to give you a better idea of how this kind of data can be useful. One good example is the
Santy search worm
(PDF), which first appeared in late 2004. Santy used combinations of search terms on Google to identify and then infect vulnerable web servers. Once a web server was infected, it became part of a
botnet
and started searching Google for more vulnerable servers. Spreading in this way, Santy quickly infected thousands and thousands of web servers across the Internet.
As soon as Google recognized the attack, we began developing a series of tools to automatically generate "
regular expressions
" that could identify potential Santy queries and then block them from accessing Google.com or flag them for further attention. But because regular expressions like these can sometimes snag legitimate user queries too, we designed the tools so they'd test new expressions in our server log databases first, in order to determine how each one would affect actual user queries. If it turned out that a regular expression affected too many legitimate user queries, the tools would automatically adjust the expression, analyze its performance against the log data again, and then repeat the process as many times as necessary.
In this instance, having access to a good sample of log data meant we were able to refine one of our automated security processes, and the result was a more effective resolution of the problem. In other instances, the data has proven useful in minimizing certain security threats, or in preventing others completely. In the end, what this means is that whenever you use Google search, or Google Apps, or any of our other services, your interactions with those products help us learn more about security threats that could impact your online experience. And the better the data we have, the more effectively we can protect all our users.
Policy and law in a changing world
Monday, March 10, 2008
Posted by Kent Walker, General Counsel, and Daphne Keller, Senior Product Counsel
This past Friday and Saturday we had the pleasure of co-hosting, together with Stanford Law School's Center for Internet and Society, the inaugural
Legal Futures conference
-- a mix of traditional, structured conference discussions, and unstructured,
Foo-style
panels.
We decided to support Legal Futures in order to facilitate a discussion of the new principles of law and policy needed in the wake of the Information Revolution.
As ever more people participate
online -- not just as consumers, but as creators, authors and artists -- policies from the previous era are increasingly difficult to apply. This shift is more than old wine in new bottles: it is a systematic change in the way people live, work and communicate that could result in tectonic shifts in law, business and public policy.
Legal Futures was a chance for a group of leading experts from academia, business, government and the non-profit sector to share their views on a diverse collection of topics: privacy, intellectual property, openness and interoperability, the rise of virtual worlds and the ideal balance of free expression and social responsibility. Because we're all about ambitious, long-term goals at Google, we were fascinated to hear a range of insightful projections of the future of the law in these areas, and we were pleased that a standing-room-only crowd came to join the conversation.
As always, everyone is smarter than anyone, and we learned a lot from the collective (but not always conventional) wisdom of those gathered. Our thanks go to our inspiring and eclectic group of participants for taking the time to meet and contribute to the dialog. Based on the success of this past weekend, we hope to make this an annual event, but in the meantime, we'll continue to welcome input from them and from all of you as we continue to struggle with these issues.
Comparative keyword ads OK in Utah
Friday, March 7, 2008
Posted by John Burchett, State Policy Counsel
Last year, we
told you
about a law passed by the Utah state legislature that essentially prohibited search engines like Google from allowing trademarks to be used as keywords to trigger ads. As we wrote at the time, this law ran counter to the precedent of federal trademark law, which has consistently upheld comparative advertising as being good for consumers, competition and free speech.
So if a department store like Macy's wanted to advertise that they sell Nike shoes, under the Utah law they would not have been able to use the term 'Nike' to trigger an ad for their store. Or if Avis wanted to announce a sale, they couldn't use the keyword "Hertz" to trigger ads for people searching for rental cars.
Although the Utah law had not yet been enforced, it represented a big potential problem for consumers and advertisers alike. Consumers would have been prevented from seeing the kind of comparative ads that help them get the best deal possible. And businesses (including small businesses) would have been prevented from advertising products that they sell. For example, if
Cole Sport
in Park City wanted to advertise that they were running a huge end-of-season sale on K2 skis, the Utah law would have prohibited them from doing so.
The law also would have hurt free speech, with citizens being unable to run ads in protest of a certain company's business practices, for example.
Fortunately, the Utah legislature
amended the bill this week
and removed the provisions of the law which prohibited this type of keyword advertising. We applaud in particular Utah
Sen. Dan Eastman
, who led the efforts to make sure Utah continued to allow competition to thrive online.
Why data matters
Tuesday, March 4, 2008
Posted by Hal Varian, Chief Economist
(Cross-posted from
Official Google Blog
)
We often use this space to discuss how we
treat user data and protect privacy
. With the post below, we're beginning an occasional series that discusses how we harness the data we collect to improve our products and services for our users. We think it's appropriate to start with a post describing how data has been critical to the advancement of search technology. - Ed.
Better data makes for better science. The history of information retrieval illustrates this principle well.
Work in this area began in the early days of computing, with simple document retrieval based on matching queries with words and phrases in text files. Driven by the availability of new data sources, algorithms evolved and became more sophisticated. The arrival of the web presented new challenges for search, and now it is common to use information from web links and many other indicators as signals of relevance.
Today's web search algorithms are trained to a large degree by the "wisdom of the crowds" drawn from the logs of billions of previous search queries. This brief overview of the history of search illustrates why using data is integral to making Google web search valuable to our users.
A brief history of search
Nowadays search is a hot topic, especially with the widespread use of the web, but the history of document search dates back to the 1950s. Search engines existed in those ancient times, but their primary use was to search a static collection of documents. In the early 60s, the research community gathered new data by digitizing abstracts of articles, enabling rapid progress in the field in the 60s and 70s. But by the late 80s, progress in this area had slowed down considerably.
In order to stimulate research in information retrieval, the National Institute of Standards and Technology (NIST) launched the
Text Retrieval Conference
(TREC) in 1992. TREC introduced new data in the form of full-text documents and used human judges to classify whether or not particular documents were relevant to a set of queries. They released a sample of this data to researchers, who used it to train and improve their systems to find the documents relevant to a new set of queries and compare their results to TREC's human judgments and other researchers' algorithms.
The TREC data revitalized research on information retrieval. Having a standard, widely available, and carefully constructed set of data laid the groundwork for further innovation in this field. The yearly TREC conference fostered collaboration, innovation, and a measured dose of competition (and bragging rights) that led to better information retrieval.
New ideas spread rapidly, and the algorithms improved. But with each new improvement, it became harder and harder to improve on last year's techniques, and progress eventually slowed down again.
And then came the web. In its beginning stages, researchers used industry-standard algorithms based on the TREC research to find documents on the web. But the need for better search was apparent -- now not just for researchers, but also for everyday users -- and the web gave us lots of new data in the form of links that offered the possibility of new advances.
There were developments on two fronts. On the commercial side, a few companies started offering web search engines, but no one was quite sure what business models would work.
On the academic side, the National Science Foundation started a "Digital Library Project" which made grants to several universities. Two Stanford grad students in computer science named Larry Page and Sergey Brin worked on this project. Their insight was to recognize that existing search algorithms could be dramatically improved by using the special linking structure of web documents. Thus
PageRank
was born.
How Google uses data
PageRank offered a significant improvement on existing algorithms by ranking the relevance of a web page not by keywords alone but also by the quality and quantity of the sites that linked to it. If I have six links pointing to me from sites such as the
Wall Street Journal
,
New York Times
, and the House of Representatives, that carries more weight than 20 links from my old college buddies who happen to have web pages.
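For the curious, here is a minimal version of the PageRank recurrence on a made-up five-site web. Real ranking involves far more than this, but it shows how link structure, not just keywords, drives the score.

```python
# Minimal PageRank sketch on a toy link graph -- illustrative only, with a made-up graph.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)  # spread this page's rank over its outlinks
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "newspaper.example": ["myblog.example"],
    "legislature.example": ["myblog.example"],
    "buddy1.example": ["buddy2.example"],
    "buddy2.example": ["buddy1.example"],
    "myblog.example": ["newspaper.example"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
# myblog.example scores highest: two links from well-linked sites outweigh the buddies' mutual links.
```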
Larry and Sergey initially tried to license their algorithm to some of the newly formed web search engines, but none were interested. Since they couldn't sell their algorithm, they decided to start a search engine themselves. The rest of the story is well-known.
Over the years, Google has continued to invest in making search better. Our information retrieval experts have added more than 200 additional signals to the algorithms that determine the relevance of websites to a user's query.
So where did those other 200 signals come from? What's the next stage of search, and what do we need to do to find even more relevant information online?
We're
constantly experimenting
with our algorithm, tuning and tweaking on a weekly basis to come up with more relevant and useful results for our users.
But in order to come up with new ranking techniques and evaluate if users find them useful, we have to store and analyze search logs. (Watch our
videos
to see exactly what data we store in our logs.) What results do people click on? How does their behavior change when we change aspects of our algorithm? Using data in the logs, we can compare how well we're doing now at finding useful information for you to how we did a year ago. If we don't keep a history, we have no good way to evaluate our progress and make improvements.
To choose a simple example: the Google spell checker is based on our analysis of user searches compiled from our logs -- not a dictionary. Similarly, we've had a lot of success in using query data to improve our information about geographic locations, enabling us to provide better local search.
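As a toy version of that spell-checker idea -- with made-up query frequencies, and nothing like the scale or sophistication of the real system -- a correction can be chosen from what people actually search for rather than from a dictionary:

```python
# Toy sketch: suggest corrections from logged query frequencies (invented data).
from difflib import get_close_matches

# Hypothetical aggregated query frequencies from logs
query_counts = {
    "britney spears": 50_000,
    "brittany spears": 4_000,
    "san francisco giants": 12_000,
    "restaurants near union square": 9_000,
}

def suggest(query, counts, cutoff=0.8):
    """Return the most frequent logged query that closely matches the input."""
    candidates = get_close_matches(query, list(counts), n=5, cutoff=cutoff)
    candidates = [c for c in candidates if counts[c] > counts.get(query, 0)]
    return max(candidates, key=counts.get) if candidates else None

print(suggest("brittany spears", query_counts))  # 'britney spears'
```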
Storing and analyzing logs of user searches is how Google's algorithm learns to give you more useful results. Just as data availability has driven progress of search in the past, the data in our search logs will certainly be a critical component of future breakthroughs.