Europe’s Android ‘choice’ screen keeps burying better options

It’s been over a year since Google began auctioning slots for a search engine ‘choice’ screen on Android in Europe, following a major antitrust intervention by the European Commission back in 2018. But despite the Commission hitting Google with a record-breaking fine over two years ago, almost nothing has changed.

The tech giant’s search marketshare remains undented and the most interesting regional search alternatives are being priced out of a Google-devised ‘remedy’ that favors those who can pay the most to be listed as an alternative to its own dominant search engine on smartphones running Google’s Android OS.

Quarterly choice screen winners have been getting increasingly same-y. Alternatives to Google are expecting another uninspiring batch of ‘winners’ to drop in short order.

The results for Q1 2021 were dominated by a bunch of ad-targeting search options few smartphone users would likely have heard of: Germany’s GMX; California-based info.com; and Puerto Rico-based PrivacyWall (which is owned by a company whose website is emblazoned with the slogan “100% programmatic advertising”) — plus another, more familiar (ad)tech giant’s search engine (Microsoft-owned) Bing.

Lower down the list: The Russian ‘Google’ — Yandex — which won eight slots. And a veteran player in the Czech search market, Seznam.cz, which bagged two.

On the ‘big loser’ side: Non-tracking search engine DuckDuckGo — which has been standing up for online privacy for over a decade yet won only one slot (in Belgium). It has come to be almost entirely squeezed out, versus winning a universal slot in all markets at the start of the auction process.

Tree-planting not-for-profit search engine Ecosia was almost entirely absent in the last round too: It gained only one slot, on the screen shown to Android users in Slovenia. Yet back in December Ecosia was added as a default search option in Safari on iOS, iPadOS and macOS — having grown its global usage to more than 15 million users.

Meanwhile, another homegrown, privacy-focused European search option — France’s Qwant — went home with just one slot. And not in its home market, either (but in tiny Luxembourg).

If Europe’s regulators had fondly imagined that a Google-devised ‘remedy’ for the major antitrust breaches they identified would automagically restore thriving competition to the Android search market, they should feel rudely awakened indeed. The bald fact is Google’s marketshare has not even been scratched, let alone dented.

Statista data for Google’s search market share on mobile (across both Android and iOS; on the latter, the tech giant pays Apple billions of dollars annually to be set as the default on iPhones) shows that in February 2021 its share in Europe stood at 97.07% — up from 96.92% in July 2018, when the Commission made the antitrust ruling.

Yes, Google has actually gained share running this ‘remedy’.

By any measure that’s a spectacular failure for EU competition enforcement — more than 2.5 years after its headline grabbing antitrust decision against Android.

The Commission has also been promoting a goal of European tech sovereignty throughout the period Google has been running this auction. President Ursula von der Leyen links this overarching goal to her digital policy agenda.

On the measure of tech sovereignty the Android choice screen must be seen as a sizeable failure too. It has not only failed to support (most) homegrown alternatives to Google (another, Cliqz, pulled the plug on its search+browser effort entirely last year, putting part of the blame on the region’s political stakeholders for failing to understand the need for Europe to own its own digital infrastructure); it is actively burying the most interesting European alternatives by forcing them to compete against a bunch of ad-funded Google clones.

(And if Brave Search takes off it’ll be another non-European alternative — albeit, one that will have benefitted from expertise and tech that was made-in-Europe… )

This is because the auction mechanism means only companies that pay Google the most can buy themselves a chance at being set as a default option on Android.

Even in the rare instances where European players shell out enough money to appear in the choice list (which likely means they’ll be losing money per search click) they most often do so alongside other non-European alternatives and Google — further raising the competitive bar for selection.

It doesn’t have to be this way. Nor was it initially: Google started with a choice screen based on marketshare.

However it very quickly switched to a pay-to-play model — throttling at a stroke the discoverability of alternative business models that aren’t based on exploiting user data (or, indeed, aren’t profit-driven, in Ecosia’s case; it uses ad-generated revenue to fund tree planting with a purely environmental goal).

Such alternatives say they typically can’t afford to win Google’s choice screen auctions. (It’s worth noting that those who do participate in the game are restricted in what they can say as Google requires they sign an NDA.)

Clearly, it’s no coincidence that the winners of Google’s auction skew almost entirely to the track-and-target side of the tracks, where its own business sits; all data-exploiting business models banded together. And then, from a consumer point of view, why would you not pick Google when such a poorly and artificially limited ‘choice’ is on offer — since you’re generally only being offered weaker versions of the same thing?

Ecosia tells TechCrunch it’s now considering pulling out of the auction process altogether — which would be a return to its first instinct, which was to boycott the auction before deciding it felt it had to participate. A few months playing Google’s pay-to-play ‘no choice’ (as Ecosia dubs the auction) game has cemented its view that the system is stacked against genuine alternatives.

Across the two auction rounds in which Ecosia has won only the one slot each time, it says it has seen no positive effect on user numbers. A decision on whether or not to withdraw entirely will be taken after the results of the next auction process are revealed, it said. (The next round of results is expected shortly, in early March.)

“We definitely realized it’s less and less ‘fun’ to play the game,” Ecosia founder Christian Kroll told us. “It’s a super unfair game — where it’s not only ‘David against Goliath’ but also Goliath gets to choose the rules, gets a free ticket, and he can change the rules of the game if he likes to. So it’s not amusing for us to participate in that.

“We’ve been participating now for nine months and if you look at overall marketshare in Europe nothing has changed. We don’t know the results yet of this round but I assume also nothing will change — the usual suspects will be there again… Most of the options that you see there now are not interesting to users.”

“Calling it a ‘choice’ screen is still a little bit ironic if you remove all the interesting choices from the screen. So the situation is still the same and it becomes less and less fun to play the game and at some point I think we might make the decision that we’re not going to be part of the game anymore,” he added.

Other alternative search engines we spoke to are continuing to participate for now — but all were critical of Google’s ‘pay-to-play’ model for the Android ‘choice screen’.

DuckDuckGo founder, Gabriel Weinberg, told us: “We are bidding, but only to help further expose to the European Commission how flawed Google’s rigged process really is, in hopes they will help more actively take a role in reforming it into something that actually works for consumers. Due to our strict privacy policy, we expect to be eliminated, same as last time.”

He pointed to a blog post the company put out last fall, denouncing the “fundamentally flawed” auction model — and saying that “whole piece still stands”. In the blog post the company wrote that despite being profitable since 2014 “we have been priced out of this auction because we choose to not maximize our profits by exploiting our users”.

“In practical terms, this means our commitment to privacy and a cleaner search experience translates into less money per search. This means we must bid less relative to other, profit-maximizing companies,” DuckDuckGo went on, adding: “This EU antitrust remedy is only serving to further strengthen Google’s dominance in mobile search by boxing out alternative search engines that consumers want to use and, for those search engines that remain, taking most of their profits from the preference menu.”

“This auction format incentivizes bidders to bid what they can expect to profit per user selection. The long-term result is that the participating Google alternatives must give most of their preference menu profits to Google! Google’s auction further incentivizes search engines to be worse on privacy, to increase ads, and to not donate to good causes, because, if they do those things, then they could afford to bid higher,” it also said then.

France’s Qwant has been similarly critical, and it told us it is “extremely dissatisfied” with the auction — calling for “urgent modification” and saying the 2018 Commission decision should be fully respected “in text and in spirit”.

“We are extremely dissatisfied with the auction system. We are asking for an urgent modification of the Choice Screen to allow consumers to find the search engine they want to use, and not just the three choices that are only the ones that pay Google the most. We demand full respect for the 2018 decision, in text and in spirit,” said CEO Jean-Claude Ghinozzi.

“We are reviewing all options and re-evaluating our decision on a quarterly basis. In any case, we want consumers to be able to freely choose the search engine they prefer, without being limited to the only three alternative choices sold by Google. Consumers’ interests must always come first,” he added.

Russia’s Yandex confirmed it has participated in the upcoming Q2 auction. But it was also critical of Google’s implementation, saying it falls short of offering a genuine “freedom of choice” to Android users.

“We aim to offer a high-quality and convenient search engine around the world. We are confident that freedom to select a search engine will lead to greater market competition and motivate each player to improve services. We don’t think that the current EU solution fully ensures freedom of choice for users, by only covering devices released from March 2020,” a Yandex spokeswoman said.

“There are currently very few such devices on the EU market in comparison with the total number of devices in users’ hands. It is essential to provide the freedom of choice that is genuine and real. Competition among service providers ultimately benefits users who will receive a better product.”

One newcomer to the search space — the anti-tracking browser Brave (which, as we mentioned above, just bought up some Cliqz assets to underpin the forthcoming launch of an own-brand Brave Search) — confirmed it will not be joining in at all.

“Brave does not plan to participate in this auction. Brave is about putting the user first, and this bidding process ignores users’ best interests by limiting their choices and selecting only for highest Google Play Store optimizing bidders,” a spokeswoman said.

“An irony here is that Google gets to profit off its own remedy for being found guilty of anti-competitive tying of Chrome into Android,” she added.

Asked about its strategy to grow usage of Brave Search in the region — outside of participation in the Android choice screen — she said: “Brave already has localized browsers for the European market, and we will continue to grow by offering best-in-class privacy showcased in marketing campaigns and referrals programs.”

Google’s self-devised ‘remedy’ followed a 2018 antitrust decision by the Commission — which led to a record-breaking $5BN penalty and an order to cease a variety of infringing behaviors. The tech giant’s implementation remains under active monitoring by EU antitrust regulators. However Kroll argues the Commission is essentially just letting Google buy time rather than fix the abusive behavior it identified.

“The way I see this at the moment is the Commission feels like the auction screen isn’t necessarily something that they’ve requested as a remedy so they can’t really force Google to change it — and that’s why they also maybe don’t see it as their responsibility,” he said. “But at the same time they requested Google to solve the situation and Google isn’t doing anything.

“I think they are also allowing Google to get the credit from the press and also from users that it seems like Google is doing something — so they are allowing Google to play on time… I don’t know if a real choice screen would be a good solution but it’s also not for me to decide — it’s up to the European Commission to decide if Google has successfully remedied the damage… and has also compensated some of the damage that it’s done and I think that has not happened at all. We can see that in the [marketshare] numbers that basically still the same situation is happening.”

“The whole thing is designed to remove interesting options from the screen,” he also argued of Google’s current ‘remedy’. “This is how it’s ‘working’ and I’m of course disappointed that nobody is stepping in there. So we’re basically in this unfair game where we get beaten up by our competitors. And I would hope for some regulator to step in and say this is not how this should go. But this isn’t happening.

“At the moment our only choice is to hang in there but at the moment if we really see there is no effect and there’s also no chance that regulators will ever step in we still have the choice to completely withdraw and let Google have its fun but without us… We’re not only not getting anything out of the [current auction model] but we’re of course also investing into it. And there are also restrictions because of the NDA we’ve signed — and even those restrictions are a little bit of a pain. So we have all the negative effects and don’t get any benefits.”

While limited by NDA in what he can discuss about the costs involved with participating in the auction, Kroll suggested the winners are doing so at a loss — pursuing reach at the expense of revenue.

“If you look at the bids from the last rounds I think with those bids it would be difficult for us to make money — and so potentially others have lost money. And that’s exactly also how this auction is designed, or how most auctions are designed, is that the winners often lose money… so you have this winner’s curse where people overbid,” he said.

“This hasn’t happened to us — also because we’re super careful — and in the last round we won this wonderful slot in Slovenia. Which is a beautiful country but again it has no impact on our revenues and we didn’t expect that to happen. It’s just for us to basically participate in the game but not risk our financial health,” he added. “We know that our bids will likely not win so the financial risk [to Ecosia as it’s currently participating and mostly losing in the auction] is not that big but for the companies who actually win bids — for them it might be a different thing.”

Kroll points out that the auction model has allowed Google to continue harvesting marketshare while weakening its competitors.

“There are quite a few companies who can afford to lose money in search because they just need to build up marketshare — and Google is basically harvesting all that and at the same time weakening its competitors,” he argued. “Because competitors need to spend on this. And one element that — at least in the beginning when the auction started — that I didn’t even see was also that if you’re a real search company… then you’re building up a brand, you’re building up a product, you’re making all these investments and you have real users — and if you have those then, if there was really a choice screen, people would naturally choose you. But in this auction screen model you’re basically paying for users that you would have anyway.

“So it’s really putting those kind of companies at a disadvantage: DuckDuckGo, us, all kinds of companies who have a ‘real USP’. Also Lilo, potentially even Qwant as well if you have a more nationalist approach to search, basically. So all of those companies are put at an even bigger disadvantage. And that’s — I think — unfair.”

Since most winners of auction slots are, like Google, involved in surveillance capitalism — gathering data on search users to profit off of ad targeting — if anyone was banking on EU competition enforcement being able to act as a lever to crack open the dominant privacy-hostile business model of the web (and allow less abusive alternatives to get a foot in the door), they must be sorely disappointed.

Better alternatives — that do not track consumers for ads; or, in the case of Ecosia, are on an entirely non-profit mission — are clearly being squeezed out.

The Commission can’t say it wasn’t warned: The moment the auction model was announced by Google, rivals decried it as flawed, rigged, unfair and unsustainable — warning it would put them at a competitive disadvantage (exactly because they aren’t just cloning Google’s ‘track and target for ad profit’ model).

Nonetheless, the Commission has so far shown itself unwilling or unable to respond — despite making a big show of proposing major new rules for the largest platforms, which it says are needed to ensure they play fair. That raises the question of why it’s not better enforcing existing EU rules against tech giants like Google.

When we raised criticism of Google’s Android choice screen auction model with the Commission it sent us its standard set of talking points — writing: “We have seen in the past that a choice screen can be an effective way to promote user choice”.

“The choice screen means that additional search providers are presented to users on start-up of every new Android device in every EEA country. So users can now choose their search provider of preference when setting up their newly purchased Android devices,” it also said, adding that it is “committed to a full and effective implementation of the decision”.

“We are therefore monitoring closely the implementation of the choice screen mechanism,” it added — a standard line since Google began its ‘compliance’ with the 2018 EU decision.

In a slight development, the Commission did also confirm it has had discussions with Google about the choice screen mechanism — following what it described as “relevant feedback from the market”. 

It said these discussions related to “the presentation and mechanics of the choice screen and to the selection mechanism of rival search providers”.

But with the clock ticking, and genuine alternatives to Google search being actively squeezed out of the market — leaving European consumers with no meaningful alternative to privacy-hostile search on Android — you do have to wonder what regulators are waiting for.

A pattern of reluctance to challenge tech giants where it counts seems to be emerging from Margrethe Vestager’s tenure at the helm of the competition department (a role she has combined, since 2019, with being a key shaper of EU digital policy).

Despite gaining a reputation for being willing to take on tech giants — and hitting Google (and others) with a number of headline-grabbing fines over the past five+ years — she cannot claim success in rebalancing the market for mobile search nor smartphone operating systems nor search ad brokering, in just the most recent Google cases.

Nonetheless she was content to green light Google’s acquisition of wearable maker Fitbit at the end of last year — despite a multitude of voices raised against allowing the tech giant to further entrench its dominance.

On that she argued defensively that concessions secured from Google would be sufficient to address concerns (such as a promise extracted from Google not to use Fitbit data for ads for at least ten years).

But, given her record on monitoring Google’s compliance with a whole flush of EU antitrust rulings, it’s hard to see why anyone other than Google should be confident in the Commission’s ability or willingness to enforce its own mandates against Google. Complaints against how Google operates, meanwhile, just keep stacking up.

“I think they are listening,” says Kroll of the Commission. “But what I am missing is action.”

 

Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach


Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and to sign up to join the case if so.
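For anyone wanting to script that check rather than use the website, Have I Been Pwned also exposes a v3 API. A minimal sketch follows; it assumes you hold an HIBP API key (the v3 ‘breachedaccount’ endpoint requires a paid key sent in a ‘hibp-api-key’ header, per HIBP’s public docs — verify the details there before relying on them):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

# Endpoint per HIBP's public v3 API docs (an assumption to verify there).
HIBP_API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def breach_check_request(account: str, api_key: str) -> urllib.request.Request:
    """Build the lookup request for one email address or phone number."""
    return urllib.request.Request(
        HIBP_API + urllib.parse.quote(account),
        headers={
            "hibp-api-key": api_key,               # the (paid) API key
            "user-agent": "breach-check-example",  # HIBP rejects blank user agents
        },
    )

def is_breached(account: str, api_key: str) -> bool:
    """True if the account appears in any known breach; a 404 means it doesn't."""
    try:
        with urllib.request.urlopen(breach_check_request(account, api_key)) as resp:
            return bool(json.load(resp))  # non-empty list of breaches
    except urllib.error.HTTPError as err:
        if err.code == 404:               # account not found in any breach
            return False
        raise
```

For a one-off check, the haveibeenpwned.com site itself remains the simpler route.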

Information leaked via the breach includes Facebook IDs, location, mobile phone numbers, email addresses, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered (and claims it fixed by September 2019), which has now led to the leak of 533M accounts, suggests it should face a higher sanction from the DPC than Twitter received.

However even if Facebook ends up with a more substantial GDPR penalty for this breach the watchdog’s caseload backlog and plodding procedural pace makes it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in in Europe and take a punt on suing for data-related damages — with a number of other mass actions announced last year.

In the case of DRI, its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE it believes compensation claims that force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose in 2019 — claiming it’s ‘old data’ — a deflection that ignores the fact that people’s dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.

Geoffrey Hinton has a hunch about what’s next for AI


Back in November, the computer scientist and cognitive psychologist Geoffrey Hinton had a hunch. After a half-century’s worth of attempts—some wildly successful—he’d arrived at another promising insight into how the brain works and how to replicate its circuitry in a computer.

“It’s my current best bet about how things fit together,” Hinton says from his home office in Toronto, where he’s been sequestered during the pandemic. If his bet pays off, it might spark the next generation of artificial neural networks—mathematical computing systems, loosely inspired by the brain’s neurons and synapses, that are at the core of today’s artificial intelligence. His “honest motivation,” as he puts it, is curiosity. But the practical motivation—and, ideally, the consequence—is more reliable and more trustworthy AI.

A Google engineering fellow and cofounder of the Vector Institute for Artificial Intelligence, Hinton wrote up his hunch in fits and starts, and at the end of February announced via Twitter that he’d posted a 44-page paper on the arXiv preprint server. He began with a disclaimer: “This paper does not describe a working system,” he wrote. Rather, it presents an “imaginary system.” He named it “GLOM.” The term derives from “agglomerate” and the expression “glom together.”

Hinton thinks of GLOM as a way to model human perception in a machine—it offers a new way to process and represent visual information in a neural network. On a technical level, the guts of it involve a glomming together of similar vectors. Vectors are fundamental to neural networks—a vector is an array of numbers that encodes information. The simplest example is the xyz coordinates of a point—three numbers that indicate where the point is in three-dimensional space. A six-dimensional vector contains three more pieces of information—maybe the red-green-blue values for the point’s color. In a neural net, vectors in hundreds or thousands of dimensions represent entire images or words. And dealing in yet higher dimensions, Hinton believes that what goes on in our brains involves “big vectors of neural activity.”

By way of analogy, Hinton likens his glomming together of similar vectors to the dynamic of an echo chamber—the amplification of similar beliefs. “An echo chamber is a complete disaster for politics and society, but for neural nets it’s a great thing,” Hinton says. The notion of echo chambers mapped onto neural networks he calls “islands of identical vectors,” or more colloquially, “islands of agreement”—when vectors agree about the nature of their information, they point in the same direction.
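Those “islands of agreement” can be made concrete: whether two vectors “point in the same direction” is typically measured with cosine similarity. A minimal sketch (Hinton’s paper doesn’t prescribe this exact function; it is just the standard illustrative choice), using the article’s six-dimensional position-plus-color vectors:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means they point in
    exactly the same direction ("agree"); near 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical (x, y, z, r, g, b) vectors, as in the text's example.
point_a = [1.0, 2.0, 3.0, 255.0, 0.0, 0.0]  # a red point
point_b = [2.0, 4.0, 6.0, 510.0, 0.0, 0.0]  # same direction, just scaled
point_c = [0.0, 0.0, 1.0, 0.0, 255.0, 0.0]  # a green point: a different "opinion"

print(cosine_similarity(point_a, point_b))  # near 1.0: these two would "glom"
print(cosine_similarity(point_a, point_c))  # near 0.0: no agreement
```

In a real network the vectors would have hundreds or thousands of dimensions, but the agreement test is the same.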

“If neural nets were more like people, at least they can go wrong the same ways as people do, and so we’ll get some insight into what might confuse them.”

Geoffrey Hinton

In spirit, GLOM also gets at the elusive goal of modelling intuition—Hinton thinks of intuition as crucial to perception. He defines intuition as our ability to effortlessly make analogies. From childhood through the course of our lives, we make sense of the world by using analogical reasoning, mapping similarities from one object or idea or concept to another—or, as Hinton puts it, one big vector to another. “Similarities of big vectors explain how neural networks do intuitive analogical reasoning,” he says. More broadly, intuition captures that ineffable way a human brain generates insight. Hinton himself works very intuitively—scientifically, he is guided by intuition and the tool of analogy making. And his theory of how the brain works is all about intuition. “I’m very consistent,” he says.

Hinton hopes GLOM might be one of several breakthroughs that he reckons are needed before AI is capable of truly nimble problem solving—the kind of human-like thinking that would allow a system to make sense of things never before encountered; to draw upon similarities from past experiences, play around with ideas, generalize, extrapolate, understand. “If neural nets were more like people,” he says, “at least they can go wrong the same ways as people do, and so we’ll get some insight into what might confuse them.”

For the time being, however, GLOM itself is only an intuition — it’s “vaporware,” says Hinton. And he acknowledges that the acronym also nicely matches “Geoff’s Last Original Model.” It is, at the very least, his latest.

Outside the box

Hinton’s devotion to artificial neural networks (a mid-20th century invention) dates to the early 1970s. By 1986 he’d made considerable progress: whereas initially nets comprised only a couple of neuron layers, input and output, Hinton and collaborators came up with a technique for a deeper, multilayered network. But it took 26 years before computing power and data capacity caught up and capitalized on the deep architecture.

In 2012, Hinton gained fame and wealth from a deep learning breakthrough. With two students, he implemented a multilayered neural network that was trained to recognize objects in massive image data sets. The neural net learned to iteratively improve at classifying and identifying various objects—for instance, a mite, a mushroom, a motor scooter, a Madagascar cat. And it performed with unexpectedly spectacular accuracy.

Deep learning set off the latest AI revolution, transforming computer vision and the field as a whole. Hinton believes deep learning should be almost all that’s needed to fully replicate human intelligence.

But despite rapid progress, there are still major challenges. Expose a neural net to an unfamiliar data set or a foreign environment, and it reveals itself to be brittle and inflexible. Self-driving cars and essay-writing language generators impress, but things can go awry. AI visual systems can be easily confused: a coffee mug recognized from the side would be an unknown from above if the system had not been trained on that view; and with the manipulation of a few pixels, a panda can be mistaken for an ostrich, or even a school bus.

GLOM addresses two of the most difficult problems for visual perception systems: understanding a whole scene in terms of objects and their natural parts; and recognizing objects when seen from a new viewpoint. (GLOM’s focus is on vision, but Hinton expects the idea could be applied to language as well.)

An object such as Hinton’s face, for instance, is made up of his lively if dog-tired eyes (too many people asking questions; too little sleep), his mouth and ears, and a prominent nose, all topped by a not-too-untidy tousle of mostly gray. And given his nose, he is easily recognized even at first sight in profile.

Both of these factors—the part-whole relationship and the viewpoint—are, from Hinton’s perspective, crucial to how humans do vision. “If GLOM ever works,” he says, “it’s going to do perception in a way that’s much more human-like than current neural nets.”

Grouping parts into wholes, however, can be a hard problem for computers, since parts are sometimes ambiguous. A circle could be an eye, or a doughnut, or a wheel. As Hinton explains it, the first generation of AI vision systems tried to recognize objects by relying mostly on the geometry of the part-whole relationship—the spatial orientation among the parts and between the parts and the whole. The second generation instead relied mostly on deep learning—letting the neural net train on large amounts of data. With GLOM, Hinton combines the best aspects of both approaches.

“There’s a certain intellectual humility that I like about it,” says Gary Marcus, founder and CEO of Robust.AI and a well-known critic of the heavy reliance on deep learning. Marcus admires Hinton’s willingness to challenge something that brought him fame, to admit it’s not quite working. “It’s brave,” he says. “And it’s a great corrective to say, ‘I’m trying to think outside the box.’”

The GLOM architecture

In crafting GLOM, Hinton tried to model some of the mental shortcuts—intuitive strategies, or heuristics—that people use in making sense of the world. “GLOM, and indeed much of Geoff’s work, is about looking at heuristics that people seem to have, building neural nets that could themselves have those heuristics, and then showing that the nets do better at vision as a result,” says Nick Frosst, a computer scientist at a language startup in Toronto who worked with Hinton at Google Brain.

With visual perception, one strategy is to parse parts of an object—such as different facial features—and thereby understand the whole. If you see a certain nose, you might recognize it as part of Hinton’s face; it’s a part-whole hierarchy. To build a better vision system, Hinton says, “I have a strong intuition that we need to use part-whole hierarchies.” Human brains understand this part-whole composition by creating what’s called a “parse tree”—a branching diagram demonstrating the hierarchical relationship between the whole, its parts and subparts. The face itself is at the top of the tree, and the component eyes, nose, ears, and mouth form the branches below.
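To make the parse-tree idea concrete, here is a minimal sketch in Python of the part-whole hierarchy described above. The node names and the tree shape are hypothetical choices for illustration, not code or structures from Hinton’s paper.

```python
from dataclasses import dataclass, field

@dataclass
class ParseNode:
    """One node in a part-whole parse tree (illustrative only)."""
    name: str
    children: list = field(default_factory=list)

    def depth(self) -> int:
        # A leaf (a subpart with no further parts) has depth 1;
        # each layer of parts above it adds one level.
        return 1 + max((c.depth() for c in self.children), default=0)

# The whole (the face) sits at the top of the tree; the component
# eyes, nose, ears, and mouth form the branches below.
face = ParseNode("face", [
    ParseNode("eye", [ParseNode("iris"), ParseNode("pupil")]),
    ParseNode("eye", [ParseNode("iris"), ParseNode("pupil")]),
    ParseNode("nose", [ParseNode("nostril")]),
    ParseNode("mouth"),
    ParseNode("ear"),
])

print(face.depth())  # face -> eye -> iris gives three levels
```

The hard part, as the next section explains, is that a neural net has a fixed architecture, while each new image calls for a different tree of this kind.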

One of Hinton’s main goals with GLOM is to replicate the parse tree in a neural net—this would distinguish it from neural nets that came before. For technical reasons, it’s hard to do. “It’s difficult because each individual image would be parsed by a person into a unique parse tree, so we would want a neural net to do the same,” says Frosst. “It’s hard to get something with a static architecture—a neural net—to take on a new structure—a parse tree—for each new image it sees.” Hinton has made various attempts; GLOM is a major revision of his previous attempt from 2017, combined with other related advances in the field.

A generalized way of thinking about the GLOM architecture is as follows: The image of interest (say, a photograph of Hinton’s face) is divided into a grid. Each region of the grid is a “location” on the image—one location might contain the iris of an eye, while another might contain the tip of his nose. For each location in the net there are about five layers, or levels. And level by level, the system makes a prediction, with a vector representing the content or information. At a level near the bottom, the vector representing the tip-of-the-nose location might predict: “I’m part of a nose!” And at the next level up, in building a more coherent representation of what it’s seeing, the vector might predict: “I’m part of a face at side-angle view!”
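The grid-of-locations layout described above can be sketched as a plain array of vectors. All the sizes here (a 4×4 grid, 5 levels per location, 16-dimensional vectors) are assumptions made for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical sizes: a 4x4 grid of image locations, 5 levels
# per location, and a 16-dimensional vector at each (location, level).
GRID, LEVELS, DIM = 4, 5, 16

rng = np.random.default_rng(0)
# state[row, col, level] holds the vector for one location at one
# level -- a low level might encode "part of a nose", a higher one
# "part of a face at side-angle view".
state = rng.normal(size=(GRID, GRID, LEVELS, DIM))

# Normalize each vector to a unit direction, so that agreement
# between two vectors can be read off as a dot product.
state /= np.linalg.norm(state, axis=-1, keepdims=True)

print(state.shape)  # (4, 4, 5, 16)
```

With vectors stored this way, “do neighboring vectors agree?” becomes a simple cosine-similarity check, which is what the next paragraphs turn on.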

But then the question is: do neighboring vectors at the same level agree? When in agreement, vectors point in the same direction, toward the same conclusion: “Yes, we both belong to the same nose.” Or, further up the parse tree: “Yes, we both belong to the same face.”

Seeking consensus about the nature of an object—about what precisely the object is, ultimately—GLOM’s vectors iteratively average, location by location and layer upon layer, with the neighboring vectors beside them, as well as with the predicted vectors from the levels above and below.

However, the net doesn’t “willy-nilly average” with just anything nearby, says Hinton. It averages selectively, with neighboring predictions that display similarities. “This is kind of well-known in America, this is called an echo chamber,” he says. “What you do is you only accept opinions from people who already agree with you; and then what happens is that you get an echo chamber where a whole bunch of people have exactly the same opinion. GLOM actually uses that in a constructive way.” The analogous phenomenon in Hinton’s system is those “islands of agreement.”
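A toy version of this selective, echo-chamber-style averaging can be written in a few lines. The softmax weighting and the temperature value below are illustrative assumptions, not the paper’s exact update rule: the point is only that each vector averages mostly with neighbors it already agrees with.

```python
import numpy as np

def echo_chamber_average(vectors, temperature=0.1):
    """Average each vector with the others, weighted by how much they
    already agree (a softmax over pairwise cosine similarity).
    A toy sketch of GLOM's selective averaging, not the real update."""
    v = vectors / np.linalg.norm(vectors, axis=-1, keepdims=True)
    sim = v @ v.T                           # pairwise agreement
    w = np.exp(sim / temperature)           # favor like-minded neighbors
    w /= w.sum(axis=1, keepdims=True)       # normalize the weights
    out = w @ v                             # weighted average
    return out / np.linalg.norm(out, axis=-1, keepdims=True)

# Two noisy variants of one direction, plus one dissenter: after a few
# rounds the similar pair converges while largely ignoring the outlier.
vecs = np.array([[1.0, 0.05], [1.0, -0.05], [-1.0, 0.0]])
for _ in range(5):
    vecs = echo_chamber_average(vecs)

print(np.round(vecs @ vecs.T, 2))  # the first two rows now agree
```

The two near-identical vectors pull each other onto a common direction and “feel it stronger,” while the opposing vector contributes almost nothing to their average, mirroring the echo-chamber analogy.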

“Imagine a bunch of people in a room, shouting slight variations of the same idea,” says Frosst—or imagine those people as vectors pointing in slight variations of the same direction. “They would, after a while, converge on the one idea, and they would all feel it stronger, because they had it confirmed by the other people around them.” That’s how GLOM’s vectors reinforce and amplify their collective predictions about an image.

GLOM uses these islands of agreeing vectors to accomplish the trick of representing a parse tree in a neural net. Whereas some recent neural nets use agreement among vectors for activation, GLOM uses agreement for representation—building up representations of things within the net. For instance, when several vectors agree that they all represent part of the nose, their small cluster of agreement collectively represents the nose in the net’s parse tree for the face. Another smallish cluster of agreeing vectors might represent the mouth in the parse tree; and the big cluster at the top of the tree would represent the emergent conclusion that the image as a whole is Hinton’s face. “The way the parse tree is represented here,” Hinton explains, “is that at the object level you have a big island; the parts of the object are smaller islands; the subparts are even smaller islands, and so on.”
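One way to picture how islands yield a parse tree is to cluster the vectors at a level into groups whose members agree above some similarity threshold. The flood-fill grouping and the 0.95 threshold below are illustrative assumptions, not the paper’s method.

```python
import numpy as np

def islands(vectors, threshold=0.95):
    """Group vectors at one level into 'islands': clusters whose
    members (transitively) agree, i.e. have cosine similarity above
    the threshold. Illustrative sketch, not the paper's algorithm."""
    v = vectors / np.linalg.norm(vectors, axis=-1, keepdims=True)
    n = len(v)
    labels = [-1] * n
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # Flood-fill every vector that agrees with vector i's island.
        stack, labels[i] = [i], current
        while stack:
            j = stack.pop()
            for k in range(n):
                if labels[k] == -1 and v[j] @ v[k] > threshold:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

# Four locations: two agree they belong to a "nose", two to a "mouth".
level_vectors = np.array([[1, 0], [0.99, 0.01], [0, 1], [0.01, 0.99]], float)
print(islands(level_vectors))  # two islands: [0, 0, 1, 1]
```

Each label corresponds to one island, so the small “nose” island and the small “mouth” island would sit below a single big “face” island at the level above.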

Figure 2 from Hinton’s GLOM paper. The islands of identical vectors (arrows of the same color) at the various levels represent a parse tree.
GEOFFREY HINTON

According to Hinton’s long-time friend and collaborator Yoshua Bengio, a computer scientist at the University of Montreal, if GLOM manages to solve the engineering challenge of representing a parse tree in a neural net, it would be a feat—it would be important for making neural nets work properly. “Geoff has produced amazingly powerful intuitions many times in his career, many of which have proven right,” Bengio says. “Hence, I pay attention to them, especially when he feels as strongly about them as he does about GLOM.”

The strength of Hinton’s conviction is rooted not only in the echo chamber analogy, but also in mathematical and biological analogies that inspired and justified some of the design decisions in GLOM’s novel engineering.

“Geoff is a highly unusual thinker in that he is able to draw upon complex mathematical concepts and integrate them with biological constraints to develop theories,” says Sue Becker, a former student of Hinton’s, now a computational cognitive neuroscientist at McMaster University. “Researchers who are more narrowly focused on either the mathematical theory or the neurobiology are much less likely to solve the infinitely compelling puzzle of how both machines and humans might learn and think.”

Turning philosophy into engineering

So far, Hinton’s new idea has been well received, especially in some of the world’s greatest echo chambers. “On Twitter, I got a lot of likes,” he says. And a YouTube tutorial laid claim to the term “MeGLOMania.”

Hinton is the first to admit that at present GLOM is little more than philosophical musing (he spent a year as a philosophy undergrad before switching to experimental psychology). “If an idea sounds good in philosophy, it is good,” he says. “How would you ever have a philosophical idea that just sounds like rubbish, but actually turns out to be true? That wouldn’t pass as a philosophical idea.” Science, by comparison, is “full of things that sound like complete rubbish” but turn out to work remarkably well—for example, neural nets, he says.

GLOM is designed to sound philosophically plausible. But will it work?

Chris Williams, a professor of machine learning in the School of Informatics at the University of Edinburgh, expects that GLOM might well spawn great innovations. However, he says, “the thing that distinguishes AI from philosophy is that we can use computers to test such theories.” It’s possible that a flaw in the idea might be exposed—perhaps also repaired—by such experiments, he says. “At the moment I don’t think we have enough evidence to assess the real significance of the idea, although I believe it has a lot of promise.”

The GLOM test model inputs are ten ellipses that form a sheep or a face.
LAURA CULP

Some of Hinton’s colleagues at Google Research in Toronto are in the very early stages of investigating GLOM experimentally. Laura Culp, a software engineer who implements novel neural net architectures, is using a computer simulation to test whether GLOM can produce Hinton’s islands of agreement in understanding parts and wholes of an object, even when the input parts are ambiguous. In the experiments, the parts are 10 ellipses, ovals of varying sizes, that can be arranged to form either a face or a sheep.

With random inputs of one ellipse or another, the model should be able to make predictions, Culp says, and “deal with the uncertainty of whether or not the ellipse is part of a face or a sheep, and whether it is the leg of a sheep, or the head of a sheep.” Confronted with any perturbations, the model should be able to correct itself as well. A next step is establishing a baseline, indicating whether a standard deep-learning neural net would get befuddled by such a task. As yet, GLOM is highly supervised—Culp creates and labels the data, prompting and pressuring the model to find correct predictions and succeed over time. (The unsupervised version is named GLUM—“It’s a joke,” Hinton says.)

At this preliminary stage, it’s too soon to draw any big conclusions. Culp is waiting for more numbers. Hinton is already impressed nonetheless. “A simple version of GLOM can look at 10 ellipses and see a face and a sheep based on the spatial relationships between the ellipses,” he says. “This is tricky, because an individual ellipse conveys nothing about which type of object it belongs to or which part of that object it is.”

And overall, Hinton is happy with the feedback. “I just wanted to put it out there for the community, so anybody who likes can try it out,” he says. “Or try some sub-combination of these ideas. And then that will turn philosophy into science.”


Pakistan temporarily blocks social media

Pakistan has temporarily blocked several social media services in the South Asian nation, according to users and a government-issued notice reviewed by TechCrunch.

In an order titled “Complete Blocking of Social Media Platforms,” the Pakistani government ordered Pakistan Telecommunication Authority to block social media platforms including Twitter, Facebook, WhatsApp, YouTube, and Telegram from 11am to 3pm local time (06.00am to 10.00am GMT) Friday.

The move comes as Pakistan looks to crack down on a violent extremist group and prevent troublemakers from disrupting Friday prayer congregations following days of violent protests.

Earlier this week Pakistan banned the Islamist group Tehrik-i-Labaik Pakistan after arresting its leader, which prompted protests, according to local media reports.

An entrepreneur based in Pakistan told TechCrunch that even though the order is supposed to expire at 3pm local time, similar past moves by the government suggest that the disruption will likely last longer.

Though Pakistan, like its neighbor India, has temporarily cut access to phone calls in the past, this is the first time Islamabad has issued a blanket ban on social media in the country.

Pakistan has explored ways to assume more control over content on digital services operating in the country in recent years. Some activists said the country was taking extreme measures without much explanation.
