AI has cracked a key mathematical puzzle for understanding our world

Unless you’re a physicist or an engineer, there really isn’t much reason for you to know about partial differential equations. I know. After years of poring over them while studying mechanical engineering in undergrad, I’ve never used them in the real world since.

But partial differential equations, or PDEs, are also kind of magical. They’re a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes.

The catch is that PDEs are notoriously hard to solve. And here, the meaning of “solve” is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a well-known set of PDEs, the Navier-Stokes equations, used to describe the motion of any fluid. “Solving” Navier-Stokes allows you to take a snapshot of the air’s motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.
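
For the mathematically curious, here is the incompressible form of Navier-Stokes as it appears in standard textbooks (included for illustration; the notation is not reproduced from the paper):

```latex
% Incompressible Navier-Stokes: momentum balance plus conservation of mass.
% u = velocity field, p = pressure, rho = density, nu = viscosity, f = external forces.
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla \cdot \mathbf{u} = 0
```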

These calculations are highly complex and computationally intensive, which is why disciplines that use a lot of PDEs often rely on supercomputers to do the math. It’s also why the AI field has taken a special interest in these equations. If we could use deep learning to speed up the process of solving them, it could do a whole lot of good for scientific inquiry and engineering.

Now researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than deep-learning methods developed previously. It’s also much more generalizable, capable of solving entire families of PDEs—such as the Navier-Stokes equation for any type of fluid—without needing retraining. Finally, it is 1,000 times faster than traditional mathematical solvers, which would ease our reliance on supercomputers and increase our computational capacity to model even bigger problems. That’s right. Bring it on.

Hammer time

Before we dive into how the researchers did this, let’s first appreciate the results. In the researchers’ demonstration gif, the first column shows two snapshots of a fluid’s motion; the second shows how the fluid continued to move in real life; and the third shows how the neural network predicted the fluid would move. It looks basically identical to the second.

The paper has gotten a lot of buzz on Twitter, and even a shout-out from rapper MC Hammer. Yes, really.

Okay, back to how they did it.

When the function fits

The first thing to understand here is that neural networks are fundamentally function approximators. (Say what?) When they train on a data set of paired inputs and outputs, they’re actually calculating the function, or series of math operations, that will transform one into the other. Think about building a cat detector. You’re training the neural network by feeding it lots of images of cats and things that are not cats (the inputs) and labeling each group with a 1 or 0, respectively (the outputs). The neural network then looks for the best function that can convert each image of a cat into a 1 and each image of everything else into a 0. That’s how it can look at a new image and tell you whether or not it’s a cat. It’s using the function it found to calculate its answer—and if its training was good, it’ll get it right most of the time.
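
To make the “function approximator” idea concrete, here is a minimal sketch in PyTorch. This is not the researchers’ code, and random tensors stand in for real cat images; it just shows a network being fit to map inputs to 1-or-0 labels:

```python
# Minimal sketch: a neural network as a function approximator.
# Stand-in data: 64-dimensional feature vectors instead of real cat images,
# with a toy rule deciding which inputs count as "cats".
import torch
import torch.nn as nn

inputs = torch.randn(200, 64)
labels = (inputs.sum(dim=1, keepdim=True) > 0).float()  # toy "cat = 1, not cat = 0" rule

model = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),  # squash output to a 0..1 "cat probability"
)

loss_fn = nn.BCELoss()  # binary cross-entropy for the 1-vs-0 labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # how far is the learned function from the labels?
    loss.backward()                        # adjust the function's parameters...
    optimizer.step()                       # ...to shrink that gap
```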

Conveniently, this function approximation process is what we need to solve a PDE. We’re ultimately trying to find a function that best describes, say, the motion of air particles over physical space and time.
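
In operator-learning terms (my paraphrase, not notation from the article), the network is trained on pairs of PDE snapshots so that it approximates the solution map itself:

```latex
% The network learns an approximation of the solution operator G,
% which maps an input state a (e.g., the current wind conditions)
% to the corresponding solution u (how the air moves from there).
G : a \mapsto u, \qquad u \approx \mathrm{NN}_{\theta}(a)
```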

Now here’s the crux of the paper. Neural networks are usually trained to approximate functions between inputs and outputs defined in Euclidean space, your classic graph with x, y, and z axes. But this time, the researchers decided to define the inputs and outputs in Fourier space, which is a special type of graph for plotting wave frequencies. The intuition that they drew upon from work in other fields, says Anima Anandkumar, a Caltech professor who oversaw the research, is that something like the motion of air can actually be described as a combination of wave frequencies. The general direction of the wind at a macro level is like a low frequency with very long, lethargic waves, while the little eddies that form at the micro level are like high frequencies with very short and rapid ones.
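
That intuition is just the classical Fourier decomposition, written schematically below (standard math, not the paper’s notation):

```latex
% A function of space expressed as a sum of wave components.
% Low-|k| terms capture the broad, slow variation (the macro-level wind);
% high-|k| terms capture the fine, rapid variation (the micro-level eddies).
f(x) = \sum_{k} \hat{f}(k)\, e^{ikx}
```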

Why does this matter? Because it’s far easier to approximate a Fourier function in Fourier space than to wrangle with PDEs in Euclidean space, which greatly simplifies the neural network’s job. Cue major accuracy and efficiency gains: in addition to its huge speed advantage over traditional methods, their technique achieves a 30% lower error rate when solving Navier-Stokes than previous deep-learning methods.
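
Here is a minimal 1D sketch of the “learn in Fourier space” trick, loosely in the spirit of the paper’s Fourier layers. The class and parameter names are mine, and the real method is more elaborate (2D/3D transforms, multiple channels, additional layers); this only shows the core move of transforming to frequency space, acting on the low modes, and transforming back:

```python
# Sketch of a spectral layer: learnable weights applied in Fourier space.
import torch
import torch.nn as nn

class SpectralLayer1d(nn.Module):
    def __init__(self, n_modes: int):
        super().__init__()
        self.n_modes = n_modes
        # One learnable complex weight per retained low-frequency mode.
        self.weights = nn.Parameter(torch.randn(n_modes, dtype=torch.cfloat) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_points) samples of a function on a regular 1D grid.
        x_ft = torch.fft.rfft(x)                       # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, : self.n_modes] = x_ft[:, : self.n_modes] * self.weights  # act on low modes only
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to physical space

layer = SpectralLayer1d(n_modes=16)
u = torch.randn(8, 128)   # a batch of 8 functions sampled at 128 points
print(layer(u).shape)     # torch.Size([8, 128])
```

Truncating to the low-frequency modes is also what makes the learned layer independent of the grid resolution, which is part of why the approach generalizes so well.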

The whole thing is extremely clever, and also makes the method more generalizable. Previous deep-learning methods had to be trained separately for every type of fluid, whereas this one only needs to be trained once to handle all of them, as confirmed by the researchers’ experiments. Though they haven’t yet tried extending this to other examples, it should also be able to handle every earth composition when solving PDEs related to seismic activity, or every material type when solving PDEs related to thermal conductivity.

Super-simulation

Anandkumar and the lead author of the paper, Zongyi Li, a PhD student in her lab, didn’t do this research just for the theoretical fun of it. They want to bring AI to more scientific disciplines. It was through talking to various collaborators in climate science, seismology, and materials science that Anandkumar first decided to tackle the PDE challenge with her students. They’re now working to put their method into practice with other researchers at Caltech and the Lawrence Berkeley National Laboratory.

One research topic Anandkumar is particularly excited about: climate change. Navier-Stokes isn’t just good at modeling air turbulence; it’s also used to model weather patterns. “Having good, fine-grained weather predictions on a global scale is such a challenging problem,” she says, “and even on the biggest supercomputers, we can’t do it at a global scale today. So if we can use these methods to speed up the entire pipeline, that would be tremendously impactful.”

There are also many, many more applications, she adds. “In that sense, the sky’s the limit, since we have a general way to speed up all these applications.”

Facebook’s latest ad tool fail puts another dent in its reputation

Reset yer counters: Facebook has had to ‘fess up to yet another major ad reporting fail.

This one looks like it could be costly for the tech giant to put right — not least because it’s another dent in its reputation for self-reporting. (We have reported on a string of Facebook ad metric errors dating back to 2016.)

AdExchanger reported last week on a code error in Facebook’s free ‘conversion lift’ tool, which it said affected several thousand advertisers.

The discovery of the flaw has since led the tech giant to offer some advertisers millions of dollars in credits, per reports this week, to compensate for miscalculating the number of sales derived from ad impressions (which is, in turn, likely to have influenced how much advertisers spent on its digital snake oil).

According to an AdAge report yesterday, which quotes industry sources, the level of compensation Facebook is offering varies depending on the advertiser’s spend — but in some instances the mistake means advertisers are being given coupons worth tens of millions of dollars.

According to reports, the issue with the tool went unfixed for as long as 12 months, persisting from August 2019 to August 2020.

The Wall Street Journal says Facebook quietly told advertisers this month about the technical problem with its calculation of the efficacy of their ad campaigns, skewing data advertisers use to determine how much to spend on its platform.

One digital agency source told the WSJ the issue particularly affects certain categories such as retail where marketers have this year increased spending on Facebook and similar channels by up to 5% or 10% to try to recover business lost during the early stages of the pandemic.

Another of its industry sources pointed out that the issue affects not just media advertisers but also the tech giant’s competitors, since the tool could influence where marketers chose to spend budget: on Facebook’s platform or elsewhere.

Last week the tech giant told AdExchanger that the bug was fixed on September 1, saying then that it was “working with impacted advertisers”.

In a subsequent statement a company spokesperson told us: “While making improvements to our measurement products, we found a technical issue that impacted some conversion lift tests. We’ve fixed this and are working with advertisers that have impacted studies.”

Facebook did not respond to a request to confirm whether some impacted advertisers are being offered millions of dollars worth of ad vouchers to rectify its code error.

It did confirm it’s offering one-time credits to advertisers that were ‘meaningfully’ impacted by the issue with the (non-billable) metric, adding that the impact is assessed on a case-by-case basis, depending on how the tool was used.

Nor did it confirm how many advertisers had impacted studies as a result of the year-long technical glitch — claiming only that it’s a small number.

For now, the tech giant can continue to run its own reporting systems for b2b customers free from external oversight. But regulating the fairness and transparency of powerful internet platforms that other businesses depend on for market access and reach is a key aim of a major forthcoming digital services legislative overhaul in the European Union.

Under the Digital Services Act and Digital Markets Act plan, the European Commission has said tech giants will be required to open up their algorithms to public oversight bodies — and will also be subject to binding transparency rules. So the clock may be ticking for Facebook’s self-serving self-reporting.

Thanksgiving on track for a record $6B in US online sales, says Adobe

As people prepare and eat their Thanksgiving meals, or just “work” on relaxing for the day, some consumers are going online to get a jump on holiday shopping deals. Adobe, which is following online sales in real time at 80 of the top 100 retailers in the US, covering some 100 million SKUs, says that initial figures indicate that we are on track to break $6 billion in e-commerce sales for Thanksgiving Day. Overall, it believes consumers will spend $189.1 billion shopping online this year.

To put that figure into some context, the overall holiday sales season represents a 33.1% jump over 2019. And last year Adobe said shoppers spent $4.2 billion online on Thanksgiving; this year’s numbers represent a jump of 42.3%. And leading up to today, each day this week had sales of more than $3 billion.

What’s going on? The figures are an encouraging sign that, despite the economic declines of 2020 caused by the Covid-19 pandemic, retailers will at least be able to make up for some of their losses over the next couple of months, traditionally the most important period for sales.

As we have been reporting over the last several months, 2020 has overall been a high-water mark year for e-commerce, with the bigger trend of more browsing and shopping online — which has been growing for years — getting a notable boost from the Covid-19 pandemic.

The push for more social distancing to slow the spread of the coronavirus has driven many to stay away from crowded places like stores, and it has forced us to stay at home, where we have turned to the internet to get things done.

These trends are not only seeing those already familiar with online shopping spend more; they are also introducing a new category of shoppers to the channel. Adobe said that so far this week, 9% of all sales have been “generated by net new customers as traditional brick-and-mortar shoppers turn online to complete transactions in light of shop closures and efforts to avoid virus transmission through in-person contact.”

Black Friday, the day after Thanksgiving, has traditionally been marked as the start of holiday shopping, but the growth of e-commerce has given more prominence to Thanksgiving Day, when physical stores are closed and many of us are milling about the house possibly with not much to do. This year seems to be following through on that trend.

“Families have many traditions during the holidays. Travel restrictions, stay-at-home orders and fear of spreading the virus are, however, preventing Americans from enjoying so many of them. Shopping online is one festive habit that can be maintained online and sales figures are showcasing that gifting remains a much beloved tradition this year,” said Taylor Schreiner, Director, Adobe Digital Insights, in a statement.

(That’s not to say that Black Friday won’t be big: Adobe predicts that it will break $10.3 billion in sales online this year.)

Some drilling down into what is selling:

Adobe said that board games and other categories that “bring the focus on family” are seeing a strong surge, with sales up five times over last year.

Similarly — in keeping with how much we are all shopping for groceries online now — grocery sales in the last week were up a whopping 596% compared to October, as people stocked up for the long weekend (whether or not, it seems, it was being spent with family).

Other top items include Hyrule Warriors: Age of Calamity and Just Dance 2021, as well as VTech toys and Rainbow High dolls.

Amazon’s announcement this week that it would be offering more options for delivery this season speaks to how e-commerce is growing beyond simple home delivery, and how fulfillment has become a key way retailers differentiate their businesses from each other. This week, curbside pickup is up 116% over last year, and expedited shipping is up 49%.

Smartphones are going to figure strongly once more, too. Adobe said $25.5 billion has been spent via smartphones in November to date (up 48% over 2019), accounting for 38.6% of all e-commerce sales.

In the US, big retailers continue to dominate how people shop, with the likes of Walmart, Target, Amazon and others (those pulling in more than $1 billion in revenue annually) collectively seeing their sales go up 147% since October. Part of the reason: more sophisticated websites, with conversion rates 100% higher than those of smaller businesses. (That leaves a big opening for companies that can build tools to help smaller businesses compete better on this front.)

AstraZeneca says it will likely do another study of COVID-19 vaccine after accidental lower dose shows higher efficacy

AstraZeneca’s CEO told Bloomberg that the pharmaceutical company will likely conduct another global trial of the effectiveness of its COVID-19 vaccine, following the disclosure that the more effective dosage in the existing Phase 3 clinical trial was actually administered by accident. AstraZeneca and its partner, the University of Oxford, reported interim results that showed 62% efficacy for a full two-dose regimen and a 90% efficacy rate for a half dose followed by a full dose – which the scientists developing the drug later acknowledged came about by accident in what was supposed to be a two-full-dose regimen.

To be clear, this shouldn’t dampen anyone’s optimism about the Oxford/AstraZeneca vaccine. The results are still very promising, and the additional trial is being done only to ensure that what was seen with the accidental half dose is actually borne out when the vaccine is administered that way intentionally. That said, this could extend the amount of time it takes for the Oxford vaccine to be approved in the U.S., since the new study will proceed ahead of a planned U.S. trial that would be required for the FDA to approve it for use domestically.

The Oxford vaccine’s rollout to the rest of the world likely won’t be affected, according to AstraZeneca’s CEO, since the studies that have been conducted, including safety data, are already in place from participants around the world outside of the U.S.

While vaccine candidates from Moderna and Pfizer have also shown very strong efficacy in early Phase 3 data, hopes are riding high on the AstraZeneca version because it relies on a different technology, can be stored and transported at standard refrigerator temperatures rather than frozen, and costs just a fraction per dose compared to the other two leading vaccines in development.

That makes it an incredibly valuable resource for global inoculation programs, including distribution where cost and transportation infrastructures are major concerns.
