
EU’s top privacy regulator urges ban on surveillance-based ad targeting


The European Union’s lead data protection supervisor has recommended that a ban on targeted advertising based on tracking Internet users’ digital activity be included in a major reform of digital services rules which aims to increase operators’ accountability, among other key goals.

The European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, made the call for a ban on surveillance-based targeted ads in reference to the Commission’s Digital Services Act (DSA) — following a request for consultation from EU lawmakers.

The DSA legislative proposal was introduced in December, alongside the Digital Markets Act (DMA) — kicking off the EU’s (often lengthy) co-legislative process which involves debate and negotiations in the European Parliament and Council on amendments before any final text can be agreed for approval. This means battle lines are being drawn to try to influence the final shape of the biggest overhaul to pan-EU digital rules for decades — with everything to play for.

The intervention by Europe’s lead data protection supervisor calling for a ban on targeted ads is a powerful pre-emptive push against attempts to water down legislative protections for consumer interests.

The Commission had not gone so far in its proposal — but big tech lobbyists are certainly pushing in the opposite direction so the EDPS taking a strong line here looks important.

In his opinion on the DSA the EDPS writes that “additional safeguards” are needed to supplement risk mitigation measures proposed by the Commission — arguing that “certain activities in the context of online platforms present increasing risks not only for the rights of individuals, but for society as a whole”.

Online advertising, recommender systems and content moderation are the areas the EDPS is particularly concerned about.

“Given the multitude of risks associated with online targeted advertising, the EDPS urges the co-legislators to consider additional rules going beyond transparency,” he goes on. “Such measures should include a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking, as well as restrictions in relation to the categories of data that can be processed for targeting purposes and the categories of data that may be disclosed to advertisers or third parties to enable or facilitate targeted advertising.”

It’s the latest regional salvo aimed at mass-surveillance-based targeted ads after the European Parliament called for tighter rules back in October — when it suggested EU lawmakers should consider a phased-in ban.

Again, though, the EDPS is going a bit further here in actually calling for one. (Facebook’s Nick Clegg will be clutching his pearls.)

More recently, the CEO of European publishing giant Axel Springer, a long-time co-conspirator of adtech interests, went public with a (rather protectionist-flavored) rant about US-based data-mining tech platforms turning citizens into “the marionettes of capitalist monopolies” — calling for EU lawmakers to extend regional privacy rules by prohibiting platforms from storing personal data and using it for commercial gain at all.

Apple CEO, Tim Cook, also took to the virtual stage of a (usually) Brussels based conference last month to urge Europe to double down on enforcement of its flagship General Data Protection Regulation (GDPR).

In the speech Cook warned that the adtech ‘data complex’ is fuelling a social catastrophe by driving the spread of disinformation as it works to profit off of mass manipulation. He went on to urge lawmakers on both sides of the pond to “send a universal, humanistic response to those who claim a right to users’ private information about what should not and will not be tolerated”. So it’s not just European companies (and institutions) calling for pro-privacy reform of adtech.

The iPhone maker is preparing to introduce stricter limits on tracking on its smartphones by making apps ask users for permission to track, instead of just grabbing their data — a move that’s naturally raised the hackles of the adtech sector, which relies on mass surveillance to power ‘relevant’ ads.

Hence the adtech industry has resorted to crying ‘antitrust‘ as a tactic to push competition regulators to block platform-level moves against its consentless surveillance. And on that front it’s notable that the EDPS’ opinion on the DMA, which proposes extra rules for intermediating platforms with the most market power, reiterates the vital links between competition, consumer protection and data protection law — saying these three are “inextricably linked policy areas in the context of the online platform economy”; and that there “should be a relationship of complementarity, not a relationship where one area replaces or enters into friction with another”.

Wiewiórowski also takes aim at recommender systems in his DSA opinion — saying these should not be based on profiling by default to ensure compliance with regional data protection rules (where privacy by design and default is supposed to be the legal default).

Here too he calls for additional measures to beef up the Commission’s legislative proposal — with the aim of “further promot[ing] transparency and user control”.

This is necessary because such systems have “significant impact”, the EDPS argues.

The role of content recommendation engines in driving Internet users towards hateful and extremist points of view has long been a subject of public scrutiny. Back in 2017, for example, UK parliamentarians grilled a number of tech companies on the topic — raising concerns that AI-driven tools, engineered to maximize platform profit by increasing user engagement, risked automating radicalization: damaging not just the individuals who become hooked on the hateful views the algorithms feed them, but causing cascading knock-on harms for all of us as societal cohesion is eaten away in the name of keeping eyeballs busy.

Yet years on little information is available on how such algorithmic recommender systems work because the private companies that operate and profit off these AIs shield the workings as proprietary business secrets.

The Commission’s DSA proposal takes aim at this sort of secrecy as a bar to accountability — with its push for transparency obligations. The proposed obligations (in the initial draft) include requirements for platforms to provide “meaningful” criteria used to target ads; and explain the “main parameters” of their recommender algorithms; as well as requirements to foreground user controls (including at least one “nonprofiling” option).

However the EDPS wants regional lawmakers to go further in the service of protecting individuals from exploitation (and society as a whole from the toxic byproducts that flow from an industry based on harvesting personal data to manipulate people).

On content moderation, Wiewiórowski’s opinion stresses that this should “take place in accordance with the rule of law” — though the Commission’s draft favors leaving it to platforms to interpret the law.

“Given the already endemic monitoring of individuals’ behaviour, particularly in the context of online platforms, the DSA should delineate when efforts to combat ‘illegal content’ legitimise the use of automated means to detect, identify and address illegal content,” he writes, in what looks like a tacit recognition of recent CJEU jurisprudence in this area.

“Profiling for purposes of content moderation should be prohibited unless the provider can demonstrate that such measures are strictly necessary to address the systemic risks explicitly identified by the DSA,” he adds.

The EDPS has also suggested minimum interoperability requirements for very large platforms, and for those designated as ‘gatekeepers’ (under the DMA), and urges lawmakers to work to promote the development of technical standards to help with this at the European level.

On the DMA, he also urges amendments to ensure the proposal “complements the GDPR effectively”, as he puts it, calling for “increasing protection for the fundamental rights and freedoms of the persons concerned, and avoiding frictions with current data protection rules”.

Among the EDPS’ specific recommendations are: That the DMA makes it clear that gatekeeper platforms must provide users with easier and more accessible consent management; clarification to the scope of data portability envisaged in the draft; and rewording of a provision that requires gatekeepers to provide other businesses with access to aggregated user data — again with an eye on ensuring “full consistency with the GDPR”.

The opinion also raises the issue of the need for “effective anonymisation” — with the EDPS calling for “re-identification tests when sharing query, click and view data in relation to free and paid search generated by end users on online search engines of the gatekeeper”.

ePrivacy reform emerges from stasis

Wiewiórowski’s contributions to shaping incoming platform regulations came on the same day that the European Council finally reached agreement on its negotiating position for a long-delayed EU reform effort around existing ePrivacy rules.

In a press release announcing the development, the Commission writes that Member States agreed on a negotiating mandate for revised rules on the protection of privacy and confidentiality in the use of electronic communications services.

“These updated ‘ePrivacy’ rules will define cases in which service providers are allowed to process electronic communications data or have access to data stored on end-users’ devices,” it writes, adding: “Today’s agreement allows the Portuguese presidency to start talks with the European Parliament on the final text.”

Reform of the ePrivacy directive has been stalled for years as conflicting interests locked horns — putting paid to the (prior) Commission’s hopes that the whole effort could be done and dusted in 2018. (The original ePrivacy reform proposal came out in January 2017; four years later the Council has finally settled on its negotiating mandate.)

The fact that the GDPR was passed first appears to have upped the stakes for data-hungry ePrivacy lobbyists — in both the adtech and telco space (the latter having a keen interest in removing existing regulatory barriers on comms data so that it can exploit the vast troves of user data which Internet giants running rival messaging and VoIP services have long been able to mine).

There’s a concerted effort to try to use ePrivacy to undo consumer protections baked into GDPR — including attempts to water down protections provided for sensitive personal data. So the stage is set for an ugly rights battle as negotiations kick off with the European Parliament.

Metadata and cookie consent rules are also bound up with ePrivacy so there’s all sorts of messy and contested issues on the table here.

Digital rights advocacy group Access Now summed up the ePrivacy development by slamming the Council for “hugely” missing the mark.

“The reform is supposed to strengthen privacy rights in the EU [but] States poked so many holes into the proposal that it now looks like French Gruyère,” said Estelle Massé, senior policy analyst at Access Now, in a statement. “The text adopted today is below par when compared to the Parliament’s text and previous versions of government positions. We lost forward-looking provisions for the protection of privacy while several surveillance measures have been added.”

The group said it will be pushing to restore requirements for service providers to protect online users’ privacy by default and for the establishment of clear rules against online tracking beyond cookies, among other policy preferences.

The Council, meanwhile, appears to be advocating for a highly diluted (and so probably useless) flavor of ‘do not track’ — by suggesting users should be able to give consent to the use of “certain types of cookies by whitelisting one or several providers in their browser settings”, per the Commission.

“Software providers will be encouraged to make it easy for users to set up and amend whitelists on their browsers and withdraw consent at any moment,” it adds in its press release.

Clearly the devil will be in the detail of the Council’s position there. (The European Parliament has, by contrast, previously clearly endorsed a “legally binding and enforceable” Do Not Track mechanism for ePrivacy so, again, the stage is set for clashes.)

Encryption is another likely bone of ePrivacy contention.

As security and privacy researcher Dr Lukasz Olejnik noted back in mid-2017, the parliament strongly backed end-to-end encryption as a means of protecting the confidentiality of comms data — saying then that Member States should not impose any obligations on service providers to weaken strong encryption.

So it’s notable that the Council does not have much to say about e2e encryption — at least in the PR version of its public position. (A line in this that runs: “As a main rule, electronic communications data will be confidential. Any interference, including listening to, monitoring and processing of data by anyone other than the end-user will be prohibited, except when permitted by the ePrivacy regulation” is hardly reassuring, either.)

It certainly looks like a worrying omission given recent efforts at the Council level to advocate for ‘lawful’ access to encrypted data. Digital and human rights groups will be buckling up for a fight.

Lyron Foster is a Hawaii based African American Musician, Author, Actor, Blogger, Filmmaker, Philanthropist and Multinational Serial Tech Entrepreneur.


Eco raises $26M in a16z-led round to scale its digital cryptocurrency platform


Eco, which has built out a digital global cryptocurrency platform, announced Friday that it has raised $26 million in a funding round led by a16z Crypto.

Founded in 2018, the SF-based startup’s platform is designed to be used as a payment tool around the world for daily-use transactions. The company emphasizes that it’s “not a bank, checking account, or credit card.”

“We’re building something better than all of those combined,” it said in a blog post. The company’s mission has also been described as an effort to use cryptocurrency as a way “to marry savings and spending,” according to this CoinList article.

Eco users can earn up to 5% annually on their deposits and get 5% cashback when transacting with merchants such as Amazon, Uber, and others. Next up: the company says it will give its users the ability to pay bills, pay friends and more “all from the same, single wallet.” That same wallet, it says, rewards people every time they spend or save.

After a “successful” alpha test with millions of dollars deposited, the company’s Eco App is now available to the public.

A slew of other VC firms participated in Eco’s latest financing, including Founders Fund, Activant Capital, Slow Ventures, Coinbase Ventures, Tribe Capital, Valor Capital Group, and more than one hundred other funds and angels. Expa and Pantera Capital co-led the company’s earlier $8.5 million funding round.

CoinList co-founder Andy Bromberg stepped down from his role last fall to head up Eco. The startup was originally called Beam before rebranding to Eco “thanks to involvement by founding advisor, Garrett Camp, who held the Eco brand,” according to Coindesk. Camp is an Uber co-founder and Expa is his venture fund.

For a16z Crypto, leading the round is in line with its mission.

In a blog post co-written by Katie Haun and Arianna Simpson, the firm outlined why it’s pumped about Eco and its plans.

“One of the challenges in any new industry — crypto being no exception — is building things that are not just cool for the sake of cool, but that manage to reach and delight a broad set of users,” they wrote. “Technology is at its best when it’s improving the lives of people in tangible, concrete ways…At a16z Crypto, we are constantly on the lookout for paths to get cryptocurrency into the hands of the next billion people. How do we think that will happen? By helping them achieve what they already want to do: spend, save, and make money — and by focusing users on tangible benefits, not on the underlying technology.”

Eco is not the only crypto platform offering rewards to users. Lolli gives users free bitcoin or cash when they shop at over 1,000 top stores.





Early-stage investor Mayfield shows how to scale up your biotech startup at TC Early Stage in April


Founders in the earliest stages of startup life face a hefty learning curve. Just some of the core competencies you need to lock down include how to raise VC funding, recruiting the right people, finding product-market fit and building a killer go-to-market team. The list goes on and on…and on. You’ll learn about all those topics and more at TechCrunch Early Stage Operations & Fundraising taking place on April 1-2. 

Do you science? Are you inspired to use biology as technology? If your entrepreneurial interests lean toward the scientific side of the startup equation, you don’t want to miss this special session — brought to you by Mayfield — at TC Early Stage 2021 on April 1-2.

Scientist Entrepreneurs — Scaling Breakout Engineering Biology Companies 

Arvind Gupta and Ursheet Parikh, early-stage investors, company builders and Mayfield partners, along with Po Bronson, NYT bestselling author and managing director of IndieBio, will discuss scaling startups and touch upon three seminal areas that influence trajectory: fundraising, hiring and product design. Their insights draw on their experience with companies including ingredients-as-service leader Geltor (which raised a $91 million Series B in 2020); CRISPR platform Mammoth Biosciences (its dream team includes co-founder and Nobel Laureate Jennifer Doudna); and Endpoint Health (started by GeneWEAVE’s founding team and former YC Bio Partner Diego Rey).

Whether you’re a biotech entrepreneur, a researcher or a scientist tackling the daunting challenges of human and planetary health, this session will help you build a stronger, more successful startup as you take your product to market.

Mayfield will follow up this session with even more content at Disrupt 2021 in September. These sessions will reveal company-building insights from entrepreneurs, investors, industry leaders and policymakers. Mayfield invests in exceptional people whose mission in life is to create a better world — not just for our generation but for future generations as well. If you science, don’t miss your opportunity to learn from leading investors who have partnered with iconic biotech and health IT entrepreneurs — from Amgen and Genentech to Mammoth Biosciences.

Get your ticket for the April TC Early Stage event here. Or get a dual-event ticket for the April and July events for double the knowledge across operations, marketing, recruiting and fundraising — and save up to $100.

Is your company interested in sponsoring or exhibiting at Early Stage 2021 — Operations & Fundraising? Contact our sponsorship sales team by filling out this form.



How to poison the data that Big Tech uses to surveil you


Every day, your life leaves a trail of digital breadcrumbs that tech giants use to track you. You send an email, order some food, stream a show. They get back valuable packets of data to build up their understanding of your preferences. That data is fed into machine-learning algorithms to target you with ads and recommendations. Google cashes your data in for over $120 billion a year of ad revenue.

Increasingly, we can no longer opt out of this arrangement. In 2019 Kashmir Hill, then a reporter for Gizmodo, famously tried to cut five major tech giants out of her life. She spent six weeks being miserable, struggling to perform basic digital functions. The tech giants, meanwhile, didn’t even feel an itch.

Now researchers at Northwestern University are suggesting new ways to redress this power imbalance by treating our collective data as a bargaining chip. Tech giants may have fancy algorithms at their disposal, but they are meaningless without enough of the right data to train on.

In a new paper being presented at the Association for Computing Machinery’s Fairness, Accountability, and Transparency conference next week, researchers including PhD students Nicholas Vincent and Hanlin Li propose three ways the public can exploit this to their advantage:

  • Data strikes, inspired by the idea of labor strikes, which involve withholding or deleting your data so a tech firm cannot use it—leaving a platform or installing privacy tools, for instance.
  • Data poisoning, which involves contributing meaningless or harmful data. AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.
  • Conscious data contribution, which involves giving meaningful data to the competitor of a platform you want to protest, such as by uploading your Facebook photos to Tumblr instead.

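The poisoning tactic is easy to illustrate in a toy setting. The sketch below is illustrative only: it uses synthetic data and a deliberately simple nearest-centroid classifier (not any real platform's model, and far cruder than tools like AdNauseam) to show how a coordinated batch of mislabelled contributions can drag a model's decision boundary and collapse its accuracy:

```python
import random

random.seed(0)

def make_data(n):
    # Two classes drawn from well-separated 1-D distributions.
    data = []
    for _ in range(n):
        label = random.choice([0, 1])
        data.append((random.gauss(4.0 * label, 1.0), label))
    return data

def nearest_centroid(train):
    # Fit a per-class mean; predict whichever centroid is closer.
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in train:
        sums[y] += x
        counts[y] += 1
    c = {y: sums[y] / counts[y] for y in (0, 1)}
    return lambda x: min((abs(x - c[y]), y) for y in (0, 1))[1]

def accuracy(model, test):
    return sum(model(x) == y for x, y in test) / len(test)

def poison(train, n_fake):
    # Coordinated poisoning: contribute misleading points, all labelled
    # class 0 but placed deep inside class 1's territory.
    return train + [(10.0, 0)] * n_fake

train, test = make_data(2000), make_data(500)
clean_acc = accuracy(nearest_centroid(train), test)
poisoned_acc = accuracy(nearest_centroid(poison(train, 800)), test)
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```

In this toy run the fake points pull the class-0 centroid into class-1 territory, so accuracy falls sharply; production systems are far more robust, but the underlying dependence on trustworthy training data is the same.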
People already use many of these tactics to protect their own privacy. If you’ve ever used an ad blocker or another browser extension that modifies your search results to exclude certain websites, you’ve engaged in data striking and reclaimed some agency over the use of your data. But as Hill found, sporadic individual actions like these don’t do much to get tech giants to change their behaviors.

What if millions of people were to coordinate to poison a tech giant’s data well, though? That might just give them some leverage to assert their demands.

There may have already been a few examples of this. In January, millions of users deleted their WhatsApp accounts and moved to competitors like Signal and Telegram after Facebook announced that it would begin sharing WhatsApp data with the rest of the company. The exodus caused Facebook to delay its policy changes.

Just this week, Google also announced that it would stop tracking individuals across the web and targeting ads at them. While it’s unclear whether this is a real change or just a rebranding, says Vincent, it’s possible that the increased use of tools like AdNauseam contributed to that decision by degrading the effectiveness of the company’s algorithms. (Of course, it’s ultimately hard to tell. “The only person who really knows how effectively a data leverage movement impacted a system is the tech company,” he says.)

Vincent and Li think these campaigns can complement strategies such as policy advocacy and worker organizing in the movement to resist Big Tech.

“It’s exciting to see this kind of work,” says Ali Alkhatib, a research fellow at the University of San Francisco’s Center for Applied Data Ethics, who was not involved in the research. “It was really interesting to see them thinking about the collective or holistic view: we can mess with the well and make demands with that threat, because it is our data and it all goes into this well together.”

There is still work to be done to make these campaigns more widespread. Computer scientists could play an important role in making more tools like AdNauseam, for example, which would help lower the barrier to participating in such tactics. Policymakers could help too. Data strikes are most effective when bolstered by strong data privacy laws, such as the European Union’s General Data Protection Regulation (GDPR), which gives consumers the right to request the deletion of their data. Without such regulation, it’s harder to guarantee that a tech company will give you the option to scrub your digital records, even if you remove your account.

And some questions remain to be answered. How many people does a data strike need to damage a company’s algorithm? And what kind of data would be most effective in poisoning a particular system? In a simulation involving a movie recommendation algorithm, for example, the researchers found that if 30% of users went on strike, it could cut the system’s accuracy by 50%. But every machine-learning system is different, and companies constantly update them. The researchers hope that more people in the machine-learning community can run similar simulations of different companies’ systems and identify their vulnerabilities.
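The flavor of that kind of simulation can be sketched in a few lines. The toy model below is an illustrative assumption, not the researchers' actual experiment: it trains a minimal item-mean recommender on synthetic ratings, then withdraws 30% of users' data (a "data strike") and compares the model's error against the hidden item qualities it is trying to recover:

```python
import random

random.seed(1)

N_USERS, N_ITEMS, RATINGS_PER_USER = 80, 100, 3

# Hidden per-item "quality" the recommender is trying to recover.
quality = {i: random.randint(1, 5) for i in range(N_ITEMS)}

def make_ratings(users):
    # Each user rates a few random items: true quality plus personal noise.
    ratings = []
    for u in users:
        for i in random.sample(range(N_ITEMS), RATINGS_PER_USER):
            noisy = min(5.0, max(1.0, quality[i] + random.gauss(0, 0.5)))
            ratings.append((u, i, noisy))
    return ratings

def item_mean_model(ratings):
    # A minimal recommender: predict an item's mean observed rating,
    # falling back to the global mean for items nobody rated.
    by_item = {}
    for _, i, r in ratings:
        by_item.setdefault(i, []).append(r)
    global_mean = sum(r for _, _, r in ratings) / len(ratings)
    return lambda i: (sum(by_item[i]) / len(by_item[i])) if i in by_item else global_mean

def mae(model):
    # Mean absolute error against the true item qualities, over every item.
    return sum(abs(model(i) - quality[i]) for i in range(N_ITEMS)) / N_ITEMS

all_users = list(range(N_USERS))
full = make_ratings(all_users)

# A 30% "data strike": those users withdraw all of their ratings.
strikers = set(random.sample(all_users, int(N_USERS * 0.3)))
reduced = [(u, i, r) for u, i, r in full if u not in strikers]

print(f"MAE with all data:    {mae(item_mean_model(full)):.2f}")
print(f"MAE after 30% strike: {mae(item_mean_model(reduced)):.2f}")
```

Because the strike only removes data, the model's coverage can only shrink: more items fall back to the uninformative global mean, which is the basic mechanism by which withheld data degrades a recommender.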

Alkhatib suggests that scholars should do more research on how to inspire collective data action as well. “Collective action is really hard,” he says. “Getting people to follow through on ongoing action is one challenge. And then there’s the challenge of how do you keep a group of people who are very transient—in this case it might be people who are using a search engine for five seconds—to see themselves as part of a community that actually has longevity?”

These tactics might also have downstream consequences that need careful examination, he adds. Could data poisoning end up just adding more work for content moderators and other people tasked with cleaning and labeling the companies’ training data?

But overall, Vincent, Li, and Alkhatib are optimistic that data leverage could turn into a persuasive tool to shape how tech giants treat our data and our privacy. “AI systems are dependent on data. It’s just a fact about how they work,” Vincent says. “Ultimately, that is a way the public can gain power.”
