

We tested a tool to confuse Google’s ad network. It works and you should use it.



We’ve all been there by now: surfing the web and bumping into ads with an uncanny flavor. How did they know I was thinking about joining a gym? Or changing careers? Or that I need a loan? You might wonder if Google can read your mind. Google even boasts that it knows you better than you know yourself.

Google can’t read your mind, of course. But it can read your search history. It tracks a lot of your web browsing, too. Google has an enormous amount of data about its users, and it uses that data to make an unimaginable amount of money from advertising: over $120 billion a year. The company runs a vast profiling machine, fitting people into categories that say who they are, what they’re worth, and how they’re expected to act. Google isn’t just organizing the world’s information; it’s sorting the world’s populations.

Many of the digital devices and platforms people use every day are built to make users transparent to the companies who want to predict, influence, and evaluate user behavior. This surveillance advertising has major social costs. Just for starters: it erodes privacy, perpetuates forms of discrimination, and siphons money away from the public-interest journalism that democracies need to survive. Lawmakers have not acted decisively to mitigate these costs.

Some activists, frustrated by the inability of regulators to effectively constrain Google’s actions, have taken matters into their own hands. Back in 2014, Daniel Howe, Mushon Zer-Aviv, and Helen Nissenbaum released a browser extension called AdNauseam that automatically clicks on web ads to interfere with behavioral tracking and profiling. Nissenbaum heads a research group at Cornell Tech, which I’m a part of.

AdNauseam is a tool of obfuscation. Obfuscation tactics are a sort of guerrilla warfare approach to the lack of privacy protections. Since it’s not possible to hide from Google’s surveillance, these tactics introduce inaccurate or excessive information to confuse and ultimately sabotage it.

This isn’t a new idea. As Nissenbaum wrote with Finn Brunton in a 2019 essay, “We are surrounded by examples of obfuscation that we do not yet think of under that name.” It can be something as simple as adding extra items to a shopping cart at the pharmacy to distract from something that might bring unwanted judgment. The Tor browser, which aggregates users’ web traffic so that no individual stands out, is perhaps one of the most successful examples of systematic obfuscation.

AdNauseam is like conventional ad-blocking software, but with an extra layer. Instead of just removing ads when the user browses a website, it also automatically clicks on them. By making it appear as if the user is interested in everything, AdNauseam makes it hard for observers to construct a profile of that person. It’s like jamming radar by flooding it with false signals. And it’s adjustable. Users can choose to trust privacy-respecting advertisers while jamming others. They can also choose whether to automatically click on all the ads on a given website or only some percentage of them.
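The adjustable clicking described above amounts to a simple per-ad decision rule. Here is a minimal sketch in Python (all names are hypothetical; AdNauseam itself is a JavaScript browser extension and its real logic is more involved):

```python
import random

def plan_decoy_clicks(ads, click_rate=1.0, trusted_advertisers=frozenset()):
    """Return the IDs of ads to click as decoys.

    ads: list of (ad_id, advertiser) pairs found on a page.
    click_rate: fraction of ads to click; 1.0 mirrors the default of
        clicking everything.
    trusted_advertisers: privacy-respecting advertisers the user has
        chosen not to jam.
    """
    to_click = []
    for ad_id, advertiser in ads:
        if advertiser in trusted_advertisers:
            continue  # user trusts this advertiser; leave its ads alone
        if random.random() < click_rate:  # click this fraction of ads
            to_click.append(ad_id)
    return to_click
```

With `click_rate=1.0`, every ad from a non-trusted advertiser gets a decoy click; dialing it down to `0.1` clicks roughly one ad in ten.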


Google, unsurprisingly, does not like AdNauseam. In 2017, it banned the extension from its Chrome Web Store. After Nissenbaum gave a lecture on AdNauseam in 2019 at the University of California, Berkeley, skeptics in the crowd, including Google employees, dismissed her effort. Google’s algorithms would, they said, easily detect and reject the illegitimate clicks—AdNauseam would be no match for Google’s sophisticated defenses.

Nissenbaum took this as a challenge. She began a research effort, which I later joined, to test whether AdNauseam works as designed. We would publish a website and buy ads on the same site on a “cost-per-click” basis—meaning the advertiser pays each time a user clicks on the ad—so we could see whether the clicks generated by AdNauseam were credited to the publisher and billed to the advertiser.

Our testing established that AdNauseam does indeed work, most of the time. But as the experiment developed, it became about more than settling this narrow question. We wanted to try to understand what’s going on inside the black box of Google’s incredibly lucrative advertising sales platforms in a way that nobody else outside the company had ever done.

The first step in the experiment involved setting up a website and an AdSense account. Google AdSense is a sales service for small publishers who don’t have the wherewithal to attract advertisers on their own. For a 32% commission, Google handles the whole process of monetizing a website’s traffic: it sells the ads, counts impressions and clicks, collects and makes payments, and keeps a lookout for fraud. If the skeptics at Nissenbaum’s talk were right, we reasoned, AdSense should smell something fishy with AdNauseam clicks and toss them back overboard.

Next, we created a campaign to advertise on the site using Google Ads, the service that buys inventory for advertisers. Google Ads is to advertisers what AdSense is to publishers. Small advertisers tell Google what sorts of people they’d like to reach and how much they’re willing to pay, and then Google finds those people as they browse a range of sites. In this case, the campaign was set up to run only on our site and to outbid any competing advertisers. We set it up this way because we wanted to be careful not to profit from it or draw unknowing bystanders into our experiment.

Positioned now on both sides of an advertising transaction, we were ready to observe the life cycle of an ad click from end to end. We invited individual volunteers to download AdNauseam and visit our site. Soon we had recorded a few dozen successful AdNauseam clicks—billed to our team’s advertiser account and credited to the publisher account. AdNauseam was working.

But this only proved that Google did not discard the very first click on an ad generated by a brand new AdNauseam user recruited specifically for the experiment. To silence the skeptics, we needed to test whether Google would learn to recognize suspicious clicking over time.

So we ran the experiment with people who had already been using AdNauseam for some time. To anyone watching for very long, these users stick out like a sore thumb, because with AdNauseam’s default settings they appear to be clicking on 100% of the ads they see. Users can adjust the click rate, but even at 10%, they’d be way outside the norm; most people click display ads only a fraction of 1% of the time. This test, then, was designed to check whether Google would disregard AdNauseam clicks from a browser with a long-standing record of astronomical click rates. If Google’s machine-learning systems are so clever, they should have no trouble with that task.
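Back-of-the-envelope arithmetic shows just how anomalous even a throttled AdNauseam user looks (illustrative figures only, taking a typical display-ad click-through rate of about 0.1%, consistent with the “fraction of 1%” above):

```python
typical_ctr = 0.001          # ~0.1%: rough display-ad click-through rate
throttled_adnauseam = 0.10   # AdNauseam dialed down to clicking 10% of ads
default_adnauseam = 1.00     # default setting: click every ad seen

print(f"throttled user: {throttled_adnauseam / typical_ctr:.0f}x the typical rate")
print(f"default user:   {default_adnauseam / typical_ctr:.0f}x the typical rate")
```

Even the throttled user clicks on the order of a hundred times more often than a typical one, which is why a simple rate-based filter ought to flag such a browser easily.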

An image of the AdNauseam “ad vault” collected by the automated Selenium browser.

We tested this in two ways.

First, with people: we recruited long-standing AdNauseam users to go to our website. We also invited new AdNauseam users to use the clicking software for a week in the course of their normal web browsing, in order to establish a history, and then to participate in the test.

Second, with software: we conducted an automated test using a software tool called Selenium, which simulates human browsing behavior. Using Selenium, we directed a browser equipped with AdNauseam to automatically surf the web, navigating across sites and pages, pausing, scrolling, and clicking ads along the way. This let us quickly build up a record of prolific clicking activity while tightly controlling variables that might be relevant to whether Google classifies a click as “authentic.” We set up four of these automated browsers and ran them for one, two, three, and seven days, respectively. At the end of each period, we sent the browsers to our experimental site to see whether AdSense accepted their clicks as legitimate. The Selenium browser that ran for seven days, for example, clicked on more than 900 Google ads, and almost 1,200 ads in all. If Google’s systems are indeed sensitive to suspicious clicking behavior, this should have set off alarm bells.
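A browsing loop of the kind described might look roughly like this (a hypothetical sketch using Selenium’s Python bindings; the experiment’s actual scripts, site lists, and ad selectors are not public, and the CSS selector below is an assumption):

```python
import random
import time

def make_scroll_plan(min_steps=2, max_steps=5):
    """Random scroll-and-pause schedule for one page: (pixels, pause_seconds)."""
    steps = random.randint(min_steps, max_steps)
    return [(600, random.uniform(1.0, 4.0)) for _ in range(steps)]

def browse_and_click(urls, ad_selector="iframe[id^='google_ads']"):
    """Drive a real browser through urls, scrolling and clicking ads.

    Requires Selenium and a matching chromedriver; in the real setup the
    AdNauseam extension would be loaded into the browser profile.
    """
    from selenium import webdriver                # imported lazily so the
    from selenium.webdriver.common.by import By   # helper above stays testable

    driver = webdriver.Chrome()
    try:
        for url in urls:
            driver.get(url)
            # Scroll down the page in steps, pausing as a reader would.
            for pixels, pause in make_scroll_plan():
                driver.execute_script(f"window.scrollBy(0, {pixels});")
                time.sleep(pause)
            # Click each ad found on the page.
            for ad in driver.find_elements(By.CSS_SELECTOR, ad_selector):
                try:
                    ad.click()  # decoy click
                except Exception:
                    pass  # ads in cross-origin iframes may reject direct clicks
                time.sleep(random.uniform(0.5, 2.0))
    finally:
        driver.quit()
```

In the experiment itself AdNauseam did the clicking; driving a browser with a script like this simply builds up, over days, the record of prolific clicking that Google’s filters would need to recognize.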

Most of our tests were successful. Google filtered out clicks on our site by the automated browser that ran for three days. But it did not filter out the vast majority of the other clicks, either by ordinary AdNauseam users or even in the higher-volume automated tests, where browsers were clicking upwards of 100 Google ads per day. In short, Google’s advanced defenses were not sensitive to the sort of clicking behavior typical of AdNauseam use.


Soon we had $100 in our AdSense account, enough to trigger Google to mail us a check. We weren’t sure what to do with it. This money wasn’t ill-gotten, by any means. We were just getting back our own money that we had invested in the advertiser account—less the 32% cut banked by Google. We decided not to cash the check. It was enough to know we’d proved that—for now, at least—AdNauseam works. The check was like a certificate of success.

Nevertheless, our experiment can’t answer some other important questions. If you use AdNauseam, how do the clicks it makes affect the profile Google has built on you? Does AdNauseam successfully shield individuals, and the populations they may be sorted into, from being targeted for advertising? (After all, even if you use the extension, Google can still collect masses of data from your email, search history, and other sources.) Even answering our simple original question—whether the software works at all—required substantial effort. Answering those other questions would require insider access across many more nodes in online advertising.

In fact, we can’t even know conclusively why our test worked—why Google did not detect these AdNauseam clicks. Was it a failure of skill or a failure of will?

A failure of skill would mean that Google’s defenses against automated ad-clicking are less sophisticated than the company claims. However, as flattering as it would be to conclude that our small team outmaneuvered one of the most powerful companies in history, that seems far-fetched.

A more likely explanation is a failure of will. Google makes money each time an ad is clicked. If advertisers found out they were being billed for phony clicks, that would of course undermine confidence in the online ad business. But advertisers can’t validate those suspicions unless they can look from both ends of the market, as we did. And even if they could, Google’s market dominance makes it hard for them to take their business elsewhere.

In a statement, Google spokeswoman Leslie Pitterson wrote, “We detect and filter the vast majority of this automated fake activity. Drawing conclusions from a small-scale experiment is not representative of Google’s advanced invalid traffic detection methods and the ongoing work of our dedicated technology, policy, and operations teams that work to combat ad fraud every day.” She added, “We invest heavily in detecting invalid traffic—including automated traffic from extensions such as AdNauseum [sic]—to protect users, advertisers, and publishers, as ad fraud hurts everyone in the ecosystem, including Google.”


If, contrary to Pitterson’s claims, the results of our experiment do hold up at scale, it may be bad news for advertisers, but it’s good news for internet users. It means that AdNauseam is one of the few tools ordinary people currently have at their disposal to guard against invasive profiling.

All the same, it is a temporary and imperfect defense. If Google finds a way—or the will—to neutralize AdNauseam, then whatever utility it has might be short-lived. AdNauseam might adapt to skirt Google’s counteroffensive, but an arms race will obviously favor Google.

Governments and regulators have generally failed to either craft or enforce rules preventing commercial surveillance. It’s true that some recent laws, like the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act, have somewhat limited companies’ ability to sell or share personal data with third parties. However, these laws don’t constrain Google’s ability to be a first-party observer to lots of internet activity and many advertising transactions. In fact, Google may benefit from these privacy laws, since they limit the ability of rivals and customers to acquire the data it has gained. Google keeps watching, and advertisers get more dependent on what it knows.

AdNauseam doesn’t stop Google from doing this, but it does let individuals protest against these cycles of surveillance and behavioral targeting that have made much of the online world into a privacy nightmare. Obfuscation is an act of resistance that serves to undermine confidence in tracking and targeting, and to erode the value of data profiles, in the hope that advertisers and ad tech companies might begin to find it impractical and unprofitable to spy on people. Anyone who wants a less invasive online advertising business can give AdNauseam a try.

Another important benefit of using AdNauseam is that, to the extent it succeeds at obfuscation, it helps protect the privacy of everyone, not just the people using it. This is because personal information is not strictly personal; information about me can feed into inferences about people I associate with or people who share something in common with me. If you and I go to the same websites, marketers might use what they know about me to make a judgment about you, perhaps labeling you as valuable, risky, or likely to click on one ad or another. AdNauseam users, by disguising their own preferences, make it harder for Google to profile and evaluate other people in their orbits. And so the profiling and prediction engines of surveillance advertising become less reliable.

But, in some ways, the skeptics are right: a few programmers and researchers can’t go toe-to-toe with technological titans. Obfuscation is no substitute for an organized and energetic movement, backed by the force of law, to counteract the surveillance advertising that governs so much of the internet. Thankfully, some governments are filing antitrust suits against Google and Facebook, launching investigations into companies’ data practices, issuing fines for transgressions, and working on potentially stronger privacy protections. But for now, guerrilla tactics like AdNauseam are the weapons we’ve got.




Daily Crunch: Alphabet shuts down Loon



Alphabet pulls the plug on its internet balloon company, Apple is reportedly developing a new MacBook Air and Google threatens to pull out of Australia. This is your Daily Crunch for January 22, 2021.

The big story: Alphabet shuts down Loon

Alphabet announced that it’s shutting down Loon, the project that used balloons to bring high-speed internet to more remote parts of the world.

Loon started out under Alphabet’s experimental projects group X, before spinning out as a separate company in 2018. Despite some successful deployments, it seems that Loon was never able to find a sustainable business model.

“While we’ve found a number of willing partners along the way, we haven’t found a way to get the costs low enough to build a long-term, sustainable business,” Loon CEO Alastair Westgarth wrote in a blog post. “Developing radical new technology is inherently risky, but that doesn’t make breaking this news any easier.”

The tech giants

Apple reportedly planning thinner and lighter MacBook Air with MagSafe charging — The plan is reportedly to release the new MacBook Air as early as late 2021 or 2022.

Google threatens to close its search engine in Australia as it lobbies against digital news code — Google is dialing up its lobbying against draft legislation intended to force it to pay news publishers.

Cloudflare introduces free digital waiting rooms for any organizations distributing COVID-19 vaccines — The goal is to help health agencies and organizations tasked with rolling out COVID-19 vaccines to maintain a fair, equitable and transparent digital queue.

Startups, funding and venture capital

‘Slow dating’ app Once is acquired by Dating Group for $18M as it seeks to expand its portfolio — Once has 9 million users on its platform, with an additional 1 million users from a spin-out app called Pickable.

MotoRefi raises $10M to keep pedal on auto refinancing growth — CEO Kevin Bennett sees the opportunity to service Americans who collectively hold $1.2 trillion in auto loans.

Backed by Vint Cerf, Emortal wants to protect your digital legacy from ‘bit-rot’ —  Emortal is a startup that wants to help you organize, protect, preserve and pass on your “digital legacy” and protect it from becoming unreadable.

Advice and analysis from Extra Crunch

How VCs invested in Asia and Europe in 2020 — The unicorns are feasting.

End-to-end operators are the next generation of consumer business — VC firm Battery has tracked seismic shifts in how consumer purchasing behavior has changed over the years.

Drupal’s journey from dorm-room project to billion-dollar exit — Twenty years ago, Drupal and Acquia founder Dries Buytaert was a college student at the University of Antwerp.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

UK resumes privacy oversight of adtech, warns platform audits are coming — The U.K.’s data watchdog has restarted an investigation of adtech practices that, since 2018, have been subject to scores of complaints under GDPR.

Boston Globe will consider people’s requests to have articles about them anonymized — It’s reminiscent of the EU’s “right to be forgotten,” though potentially less controversial.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.



The far right’s favorite registrar is building ‘censorship-resistant’ servers



“The digital divide is now a matter of life and death for people who are unable to access essential healthcare information,” said UN Secretary General António Guterres in June 2020. Almost half the global population currently has no internet access, and many who do cannot freely access all information sources. 

Freedom House, which tracks internet restrictions worldwide, says the coronavirus pandemic is accelerating a dramatic decline in global internet freedom. It found that governments in at least 28 countries censored websites and social media posts in 2020 to suppress unfavorable health statistics, corruption allegations and other COVID-19-related content.

Now, U.S. company Toki is building “school-in-a-box” devices to connect up to 1 billion people across Africa and Asia, using technologies that it claims could filter content to avoid some information sources and bypass local censorship. The devices will be Wi-Fi-ready servers that run on electric power or batteries and can handle dozens of concurrent users. If no networks are available, the servers will also come pre-installed with digital libraries curated to provide “locally relevant content.” 

One of Toki’s country managers says on LinkedIn that the devices would also run a decentralized search engine, designed to be anonymous, private and censorship-resistant. They will be donated to communities in the developing world by a U.S. nonprofit* called eRise, which was founded in 2019 to, according to its website, “focus on digital empowerment initiatives that are capital-efficient, and which improve access to content, community and commerce.”

Both Toki and eRise were founded by entrepreneur and free speech advocate Rob Monster. Monster owns domain registration company Epik, which allowed controversial social network Parler to come briefly back online last week after the site was booted from Amazon’s cloud service. Parler is just one of several platforms enabled by Epik, and Monster’s other domain and web hosting companies, that have been home to far-right content. Parler is accused of hosting users that helped to coordinate the attack on the U.S. Capitol on January 6. 

The “school-in-a-box” would contain a memory card with educational content, games, books, maps and modules related to prayers, the story of religions and “the art of being grateful.” The company says the device is intended for “parents who want their kids to be smarter and curious; schools who can’t afford a computer; [and] religious places who wish to spread awareness about education and empower the society.”

But one researcher says this effort recalls Facebook’s heavily criticized project offering free connectivity in India, which spawned accusations of bias and self-censorship. 

“We’ve seen a similar tactic by Facebook, to provide digital access points that can also serve the purpose of delivering favorable content and ensuring that these groups become dependent on your benevolence,” said Dr. Joan Donovan, director of the Technology and Social Change Research Project at the Shorenstein Center. “It becomes that much harder later on to change the power dynamics when the ideology is in the infrastructure.”

Monster has used free speech arguments to defend Epik’s working with platforms that either welcome or tolerate extreme content. The Southern Poverty Law Center, which tracks hate groups, has been reported as saying that Monster “offers services to the most disreputable horrific people on the Internet.” 

Epik spokesperson Rob Davis told TechCrunch that Epik actively works with its clients to help them moderate content, and claimed that the company has deplatformed Nazi groups and deleted those promoting genocide.

“Lawful, responsible freedom of speech is an amazing right,” said Davis. “Every [domain registrar] has groups like this but Epik is often held to a higher standard.”

In a series of posts in 2019 on a forum dedicated to domain-name trading, Monster provided more details about the Toki technology. The servers would be powered by cheap Raspberry Pi processors and run a proprietary version of Linux that would enable file sharing, peer-to-peer commerce, a digital wallet and a personalized search engine, with the option of “ignoring certain data sources.” 

“Decentralization not only means decentralization of the narrative and talking points of big tech groups like Google, Twitter and Facebook,” said Epik’s Davis. “It also means anti-censorship by empowering people with things that they didn’t know.” The spokesperson gave the example of naturopathic remedies for minor health complaints. Naturopathic remedies have not been proven to be effective against COVID-19.

Eventually, each device might come pre-loaded with a “snapshot” of the internet, said Davis, although he did not describe how the internet might be reduced to fit on a single, small physical device. The eRise website notes that content would be curated by local digital librarians that it would recruit. Davis told TechCrunch that Toki has working models of its server, is already conducting field trials and hopes to start deploying the devices to 6,000 villages in Africa in 2022 or 2023, perhaps in collaboration with an unnamed Asian telecoms company. 

The Toki devices’ selectivity, if practical, could raise its own content and censorship concerns; for example, if eRise allowed extreme content similar to that seen on Epik’s clients like Gab and Parler, or ignored scientific advice on COVID-19 or other health issues. 

Donovan said she is wary of any one-box solution. “We have to focus on decoupling information companies from service providers,” she said. “That much control can be used for political gain. Technology is politics by other means.”

*Although eRise also claims on its website to be a 501(c)(3) nonprofit, which would exempt it from some taxes and allow tax-free donations, TechCrunch could not locate it on the IRS’s database of nonprofits. Monster later admitted eRise was not a registered 501(c)(3).



End-to-end operators are the next generation of consumer business



At Battery, a central part of our consumer investing practice involves tracking the evolution of where and how consumers find and purchase goods and services. From our annual Battery Marketplace Index, we’ve seen seismic shifts in how consumer purchasing behavior has changed over the years, starting with the move to the web and, more recently, to mobile and on-demand via smartphones.

The evolution looks like this in a nutshell: In the early days, listing sites like Craigslist, Angie’s List* and Yelp effectively put the Yellow Pages online — you could find a new restaurant or plumber on the web, but the process of contacting them was largely still offline. As consumers grew more comfortable with the web, marketplaces like eBay, Etsy, Expedia and Wayfair* emerged, enabling historically offline transactions to occur online.

More recently, and spurred in large part by mobile, on-demand use cases, managed marketplaces like Uber, DoorDash, Instacart and StockX* have taken online consumer purchasing a step further. They play a greater role in the operations of the marketplace, from automatically matching demand with supply, to verifying the supply side for quality, to dynamic pricing.

The key purpose of being end-to-end is to deliver an even better value proposition to consumers relative to incumbent alternatives.

Each stage of this evolution unlocked billions of dollars in value, and many of the names listed above remain the largest consumer internet companies today.

At their core, these companies are facilitators, matching consumer demand with existing supply of a product or service. While there is no doubt these companies play a hugely valuable role in our lives, we increasingly believe that simply facilitating a transaction or service isn’t enough. Particularly in industries where supply is scarce, or in old-guard industries where innovation in the underlying product or service is slow, a digitized marketplace — even when managed — can produce underwhelming experiences for consumers.

In these instances, starting from the ground up is what is really required to deliver an optimal consumer experience. Back in 2014, Chris Dixon wrote a bit about this phenomenon in his post on “Full stack startups.” Fast forward several years, and more startups than ever are “full stack” or as we call it, “end-to-end operators.”

These businesses are fundamentally reimagining their product experience by owning the entire value chain, from end to end, thereby creating a step-functionally better experience for consumers. Owning more in the stack of operations gives these companies better control over quality, customer service, delivery, pricing and more — which gives consumers a better, faster and cheaper experience.

It’s worth noting that these end-to-end models typically require more capital to reach scale, as greater upfront investment is necessary to get them off the ground than for other, more narrowly focused marketplaces. But in our experience, the additional capital required is often outweighed by the value captured from owning the entire experience.

End-to-end operators span many verticals

Many of these businesses have reached meaningful scale across industries:

All of these companies have recognized they can deliver more value to consumers by “owning” every aspect of the underlying product or service — from the bike to the workout content in Peloton’s case, or the bank account to the credit card in Chime’s case. They have reinvented and reimagined the entire consumer experience, from end to end.

What does success for end-to-end operator businesses look like?

As investors, we’ve had the privilege of meeting with many of these next-generation end-to-end operators over the years and found that those with the greatest success tend to exhibit the five key elements below:

1. Going after very large markets

The end-to-end approach makes the most sense when disrupting very large markets. In the graphic above, notice that most of these companies play in the largest, but notoriously archaic industries like banking, insurance, real estate, healthcare, etc. Incumbents in these industries are very large and entrenched, but they are legacy players, making them slow to adopt new technology. For the most part, they have failed to meet the needs of our digital-native, mobile-savvy generation and their experiences lag behind consumer expectations of today (evidenced by low, or sometimes even negative, NPS scores). Rebuilding the experience from the ground up is sometimes the only way to satisfy today’s consumers in these massive markets.

2. Step-functionally better consumer experience versus the status quo
