
AI is wrestling with a replication crisis


Last month Nature published a damning response written by 31 scientists to a study from Google Health that had appeared in the journal earlier this year. Google was describing successful trials of an AI that looked for signs of breast cancer in medical images. But according to its critics, the Google team provided so little information about its code and how it was tested that the study amounted to nothing more than a promotion of proprietary tech.

“We couldn’t take it anymore,” says Benjamin Haibe-Kains, the lead author of the response, who studies computational genomics at the University of Toronto. “It’s not about this study in particular—it’s a trend we’ve been witnessing for multiple years now that has started to really bother us.”

Haibe-Kains and his colleagues are among a growing number of scientists pushing back against a perceived lack of transparency in AI research. “When we saw that paper from Google, we realized that it was yet another example of a very high-profile journal publishing a very exciting study that has nothing to do with science,” he says. “It’s more an advertisement for cool technology. We can’t really do anything with it.”

Science is built on a bedrock of trust, which typically involves sharing enough details about how research is carried out to enable others to replicate it, verifying results for themselves. This is how science self-corrects and weeds out results that don’t stand up. Replication also allows others to build on those results, helping to advance the field. Science that can’t be replicated falls by the wayside.

At least, that’s the idea. In practice, few studies are fully replicated because most researchers are more interested in producing new results than reproducing old ones. But in fields like biology and physics—and computer science overall—researchers are typically expected to provide the information needed to rerun experiments, even if those reruns are rare.

Ambitious noob

AI is feeling the heat for several reasons. For a start, it is a newcomer. It has only really become an experimental science in the past decade, says Joelle Pineau, a computer scientist at Facebook AI Research and McGill University, who coauthored the complaint. “It used to be theoretical, but more and more we are running experiments,” she says. “And our dedication to sound methodology is lagging behind the ambition of our experiments.”

The problem is not simply academic. A lack of transparency prevents new AI models and techniques from being properly assessed for robustness, bias, and safety. AI moves quickly from research labs to real-world applications, with direct impact on people’s lives. But machine-learning models that work well in the lab can fail in the wild—with potentially dangerous consequences. Replication by different researchers in different settings would expose problems sooner, making AI stronger for everyone. 

AI already suffers from the black-box problem: it can be impossible to say exactly how or why a machine-learning model produces the results it does. A lack of transparency in research makes things worse. Large models need as many eyes on them as possible, more people testing them and figuring out what makes them tick. This is how we make AI in health care safer, AI in policing more fair, and chatbots less hateful.

What’s stopping AI replication from happening as it should is a lack of access to three things: code, data, and hardware. According to the 2020 State of AI report, a well-vetted annual analysis of the field by investors Nathan Benaich and Ian Hogarth, only 15% of AI studies share their code. Industry researchers are bigger offenders than those affiliated with universities. In particular, the report calls out OpenAI and DeepMind for keeping code under wraps.

Then there’s the growing gulf between the haves and have-nots when it comes to the two pillars of AI, data and hardware. Data is often proprietary, such as the information Facebook collects on its users, or sensitive, as in the case of personal medical records. And tech giants carry out more and more research on enormous, expensive clusters of computers that few universities or smaller companies have the resources to access.

To take one example, training the language generator GPT-3 is estimated to have cost OpenAI $10 to $12 million—and that’s just the final model, not including the cost of developing and training its prototypes. “You could probably multiply that figure by at least one or two orders of magnitude,” says Benaich, who is founder of Air Street Capital, a VC firm that invests in AI startups. Only a tiny handful of big tech firms can afford to do that kind of work, he says: “Nobody else can just throw vast budgets at these experiments.”

The rate of progress is dizzying, with thousands of papers published every year. But unless researchers know which ones to trust, it is hard for the field to move forward. Replication lets other researchers check that results have not been cherry-picked and that new AI techniques really do work as described. “It’s getting harder and harder to tell which are reliable results and which are not,” says Pineau.

What can be done? Like many AI researchers, Pineau divides her time between university and corporate labs. For the last few years, she has been the driving force behind a change in how AI research is published. For example, last year she helped introduce a checklist of things that researchers must provide, including code and detailed descriptions of experiments, when they submit papers to NeurIPS, one of the biggest AI conferences.

Replication is its own reward

Pineau has also helped launch a handful of reproducibility challenges, in which researchers try to replicate the results of published studies. Participants select papers that have been accepted to a conference and compete to rerun the experiments using the information provided. But the only prize is kudos.

This lack of incentive is a barrier to such efforts throughout the sciences, not just in AI. Replication is essential, but it isn’t rewarded. One solution is to get students to do the work. For the last couple of years, Rosemary Ke, a PhD student at Mila, a research institute in Montreal founded by Yoshua Bengio, has organized a reproducibility challenge where students try to replicate studies submitted to NeurIPS as part of their machine-learning course. In turn, some successful replications are peer-reviewed and published in the journal ReScience. 

“It takes quite a lot of effort to reproduce another paper from scratch,” says Ke. “The reproducibility challenge recognizes this effort and gives credit to people who do a good job.” Ke and others are also spreading the word at AI conferences via workshops set up to encourage researchers to make their work more transparent. This year Pineau and Ke extended the reproducibility challenge to seven of the top AI conferences, including ICML and ICLR. 

Another push for transparency is the Papers with Code project, set up by AI researcher Robert Stojnic when he was at the University of Cambridge. (Stojnic is now a colleague of Pineau’s at Facebook.) Launched as a stand-alone website where researchers could link a study to the code that went with it, this year Papers with Code started a collaboration with arXiv, a popular preprint server. Since October, all machine-learning papers on arXiv have come with a Papers with Code section that links directly to code that authors wish to make available. The aim is to make sharing the norm.

Do such efforts make a difference? Pineau found that last year, when the checklist was introduced, the number of researchers including code with papers submitted to NeurIPS jumped from less than 50% to around 75%. Thousands of reviewers say they used the code to assess the submissions. And the number of participants in the reproducibility challenges is increasing.

Sweating the details

But it is only a start. Haibe-Kains points out that code alone is often not enough to rerun an experiment. Building AI models involves making many small changes—adding parameters here, adjusting values there. Any one of these can make the difference between a model working and not working. Without metadata describing how the models are trained and tuned, the code can be useless. “The devil really is in the detail,” he says.
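
The metadata Haibe-Kains has in mind need not be elaborate. As a purely hypothetical illustration (not a format the Google team, Nature, or NeurIPS prescribes), a few lines of Python are enough to record the seed, hyperparameters, and library versions alongside a training run so that someone else can at least attempt an exact rerun:

```python
# Hypothetical sketch: save the details a rerun would need next to the results.
# The field names and file layout are illustrative, not any lab's standard.
import json
import platform
import random

import numpy as np


def snapshot_run_metadata(hyperparams: dict, seed: int, path: str = "run_metadata.json") -> None:
    """Record the seed, hyperparameters, and environment used for a training run."""
    random.seed(seed)
    np.random.seed(seed)
    metadata = {
        "seed": seed,
        "hyperparameters": hyperparams,
        "python_version": platform.python_version(),
        "numpy_version": np.__version__,
    }
    with open(path, "w") as f:
        json.dump(metadata, f, indent=2)


snapshot_run_metadata({"learning_rate": 1e-4, "batch_size": 32, "epochs": 50}, seed=1234)
```

Sharing a file like this costs authors almost nothing, but it turns "the devil is in the detail" from a dead end into a checklist.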

It’s also not always clear exactly what code to share in the first place. Many labs use special software to run their models; sometimes this is proprietary. It is hard to know how much of that support code needs to be shared as well, says Haibe-Kains.

Pineau isn’t too worried about such obstacles. “We should have really high expectations for sharing code,” she says. Sharing data is trickier, but there are solutions here too. If researchers cannot share their data, they might give directions so that others can build similar data sets. Or you could have a process where a small number of independent auditors were given access to the data, verifying results for everybody else, says Haibe-Kains.

Hardware is the biggest problem. But DeepMind claims that big-ticket research like AlphaGo or GPT-3 has a trickle-down effect, where money spent by rich labs eventually leads to results that benefit everyone. AI that is inaccessible to other researchers in its early stages, because it requires a lot of computing power, is often made more efficient—and thus more accessible—as it is developed. “AlphaGo Zero surpassed the original AlphaGo using far less computational resources,” says Koray Kavukcuoglu, vice president of research at DeepMind.

In theory, this means that even if replication is delayed, at least it is still possible. Kavukcuoglu notes that Gian-Carlo Pascutto, a Belgian coder at Mozilla who writes chess and Go software in his free time, was able to re-create a version of AlphaGo Zero called Leela Zero, using algorithms outlined by DeepMind in its papers. Pineau also thinks that flagship research like AlphaGo and GPT-3 is rare. The majority of AI research is run on computers that are available to the average lab, she says. And the problem is not unique to AI. Pineau and Benaich both point to particle physics, where some experiments can only be done on expensive pieces of equipment such as the Large Hadron Collider.

In physics, however, university labs run joint experiments on the LHC. Big AI experiments are typically carried out on hardware that is owned and controlled by companies. But even that is changing, says Pineau. For example, a group called Compute Canada is putting together computing clusters to let universities run large AI experiments. Some companies, including Facebook, also give universities limited access to their hardware. “It’s not completely there,” she says. “But some doors are opening.”

Haibe-Kains is less convinced. When he asked the Google Health team to share the code for its cancer-screening AI, he was told that it needed more testing. The team repeats this justification in a formal reply to Haibe-Kains’s criticisms, also published in Nature: “We intend to subject our software to extensive testing before its use in a clinical environment, working alongside patients, providers and regulators to ensure efficacy and safety.” The researchers also said they did not have permission to share all the medical data they were using.

It’s not good enough, says Haibe-Kains: “If they want to build a product out of it, then I completely understand they won’t disclose all the information.” But he thinks that if you publish in a scientific journal or conference, you have a duty to release code that others can run. Sometimes that might mean sharing a version that is trained on less data or uses less expensive hardware. It might give worse results, but people will be able to tinker with it. “The boundaries between building a product versus doing research are getting fuzzier by the minute,” says Haibe-Kains. “I think as a field we are going to lose.” 

Research habits die hard

If companies are going to be criticized for publishing, why do it at all? There’s a degree of public relations, of course. But the main reason is that the best corporate labs are filled with researchers from universities. To some extent the culture at places like Facebook AI Research, DeepMind, and OpenAI is shaped by traditional academic habits. Tech companies also win by participating in the wider research community. All big AI projects at private labs are built on layers and layers of public research. And few AI researchers haven’t made use of open-source machine-learning tools like Facebook’s PyTorch or Google’s TensorFlow.

As more research is done in house at giant tech companies, certain trade-offs between the competing demands of business and research will become inevitable. The question is how researchers navigate them. Haibe-Kains would like to see journals like Nature split what they publish into separate streams: reproducible studies on one hand and tech showcases on the other.

But Pineau is more optimistic. “I would not be working at Facebook if it did not have an open approach to research,” she says. 

Other large corporate labs stress their commitment to transparency too. “Scientific work requires scrutiny and replication by others in the field,” says Kavukcuoglu. “This is a critical part of our approach to research at DeepMind.”

“OpenAI has grown into something very different from a traditional laboratory,” says Kayla Wood, a spokesperson for the company. “Naturally that raises some questions.” She notes that OpenAI works with more than 80 industry and academic organizations in the Partnership on AI to think about long-term publication norms for research.

Pineau believes there’s something to that. She thinks AI companies are demonstrating a third way to do research, somewhere between Haibe-Kains’s two streams. She contrasts the intellectual output of private AI labs with that of pharmaceutical companies, for example, which invest billions in drugs and keep much of the work behind closed doors.

The long-term impact of the practices introduced by Pineau and others remains to be seen. Will habits be changed for good? What difference will it make to AI’s uptake outside research? A lot hangs on the direction AI takes. The trend for ever larger models and data sets—favored by OpenAI, for example—will continue to make the cutting edge of AI inaccessible to most researchers. On the other hand, new techniques, such as model compression and few-shot learning, could reverse this trend and allow more researchers to work with smaller, more efficient AI.

Either way, AI research will still be dominated by large companies. If it’s done right, that doesn’t have to be a bad thing, says Pineau: “AI is changing the conversation about how industry research labs operate.” The key will be making sure the wider field gets the chance to participate. Because the trustworthiness of AI, on which so much depends, begins at the cutting edge. 


A tween tries Apple’s new ‘Family Setup’ system for Apple Watch


With the release of watchOS 7, Apple at last turned the Apple Watch into the GPS-based kid tracker parents have wanted, albeit at a price point that requires careful consideration. As someone in the target demographic for such a device — a parent of a “tween” who’s allowed to freely roam the neighborhood (but not without some sort of communication device) — I put the new Family Setup system for the Apple Watch through its paces over the past couple of months.

The result? To be frank, I’m conflicted as to whether I’d recommend the Apple Watch to a fellow parent, as opposed to just suggesting that it’s time to get the child a phone.

This has to do, in part, with the advantages offered by a dedicated family tracking solution — like Life360, for example — as well as how a child may respond to the Apple Watch itself, and the quirks of using a solution that wasn’t initially designed with the needs of family tracking in mind.

As a parent of a busy and active tween (nearly 11), I can see the initial appeal of an Apple Watch as a family tracker. It has everything you need for that purpose: GPS tracking, the ability to call and text, alerts, and access to emergency assistance. It’s easy to keep up with, theoretically, and it’s not as pricey as a new iPhone. (The new Apple Watch SE cellular models start at $329. The feature also works on older models with cellular, Apple Watch Series 4 or later. Adding the Apple Watch to your phone plan usually costs around $10 more per month.)

I think the Apple Watch as a kid tracker mainly appeals to a specific type of parent: one who’s worried about the dangers of giving a younger child a phone and thereby giving them access to the world of addictive apps and the wider internet. I understand that concern, but I personally disagree with the idea that you should wait until a child is “older,” then hand them a phone and say “ok, good luck with that!” They need a transition period and the “tween” age range is an ideal time frame to get started.

The reality is that smartphones and technology are unavoidable. As a parent, I believe it’s my job to introduce these things in small measures — with parental controls and screen time limits, for example. And then I need to monitor their usage. I may make mistakes and so will my daughter, but we both need these extra years to figure out how to balance parenting and the use of digital tools. With a phone, I know I will have to have the hard conversations about the problems we run into. I understand, too, why parents want to put that off, and just buy a watch instead.


After my experience, I feel the only cases where I’d fully endorse the Apple Watch would be for those tech-free or tech-light families where kids will not be given phones at any point, households where kids’ phone usage is highly restricted (like those with Wi-Fi only phones), or those where kids don’t get phones until their later teenage years. I am not here to convince them of my alternative, perhaps more progressive view on when to give a kid a phone. The Apple Watch may make sense for these families and that’s their prerogative.

However, a number of people may be wondering if the Apple Watch can be a temporary solution for perhaps a year or two before they buy the child a smartphone. To them, I have to say this feels like an expensive way to delay the inevitable, unavoidable task of having to parent your child through the digital age.

Given my position on the matter, my one big caveat to this review is that my daughter does, in fact, have a smartphone. Also, let’s be clear: this is not meant to be a thorough review of the Apple Watch itself, or a detailed report of its various “tech specs”. It’s a subjective report as to how things went for us that, hopefully, you can learn from.


To begin, the process of configuring the new Apple Watch with Family Setup was easy. “Set Up for a Family Member” is one of two setup options to tap on as you get started. Apple offers a simple user interface that walks you through pairing the Watch with your phone and all the choices that have to be made, like enabling cellular, turning on “Ask to Buy” for app purchases, enabling Schooltime and Activity features, and more.

What was harder was actually using the Apple Watch as intended after it was configured. I found it far easier to launch an iPhone app (like Life360, which we use) where everything you need is in one place. That turned out not to be true for the Apple Watch Family Setup system.

For the purpose of testing the Apple Watch with Family Setup, my daughter would leave her iPhone behind when she went out biking or when meeting up with friends for outdoor activities.

For a child who had worked her way up to an iPhone over a couple of years, she was, I have to admit, surprisingly irresponsible with the watch in the early weeks.

At first she didn’t treat it with the respect a multi-hundred-dollar device deserves, but rather like her junk jewelry or her wrist-worn scrunchies. The Apple Watch was tossed on a dresser, a bathroom counter, a kitchen table, a beanbag chair, and so on.

Thankfully, the “Find My” app can locate the Apple Watch, if it has battery and a signal. But I’m not going to lie — there were some scary moments where a dead watch was later found on the back of a toilet (!!), on the top of the piano, and once, abandoned at a friend’s house.

And this, from a child who always knows where her iPhone is!

The problem is that her iPhone is something she learned to be responsible for after years of practice. This fooled me into thinking she actually was responsible for expensive devices. For two years, we painfully went through a few low-end Android phones while she got the hang of keeping up with and caring for such a device. Despite wrapping those starter phones in protective cases, we still lost one to a screen-destroying crash on a tile floor and another to being run over by a car. (How it flew out of a pocket and into the middle of the road, I’ll never understand!)

But, eventually, she did earn access to a hand-me-down iPhone. And after initially only being allowed to use it in the house on Wi-Fi, that phone now goes outdoors and has its own phone number. And she has been careful with it in the months since. (Ahem, knocks on wood.)

The Apple Watch, however, held no such elevated status for her. It was not an earned privilege. It was not fun. It was not filled with favorite apps and games. It was, instead, thrust upon her.

While the iPhone is used often for enjoyable and addictive activities like Roblox, TikTok, Disney+, and Netflix, the Apple Watch was boring by comparison. Sure, there are a few things you can do on the device — it has an App Store! You can make a Memoji! You can customize different watch faces! But unless this is your child’s first-ever access to technology, these features may have limited appeal.

“Do you want to download this game? This looks fun,” I suggested, pointing to a coloring game, as we looked at her Watch together one night.

“No thanks,” she replied.

“Why not?”

“I just don’t think it would be good on the little screen.”

“Maybe a different game?”

“Nah.”

And that was that. I could not convince her to give a single Apple Watch app a try in the days that followed.

She didn’t even want to stream music on the Apple Watch — she has Alexa for that, she pointed out. She didn’t want to play a game on the watch — she has Roblox on the bigger screen of her hand-me-down laptop. She also has a handheld Nintendo Switch.


Initially, she picked an Apple Watch face that matched her current “aesthetic” — simple and neutral — and that was the extent of her interest in personalizing the device in the first several weeks.

Having already burned herself out on Memoji by borrowing my phone to play with the feature when it launched, she wasn’t much interested in doing more with the customized avatar creation process, despite my suggestions to try it. (She had already made a Memoji the profile photo for her contact card on iPhone.)

However, I later showed her the Memoji Watch Face option after I set it up, and asked her if she liked it. She responded “YESSSS. I love it,” and snatched the watch from my hand to play some more.

Demo’ing features is important, it seems.

But largely, the Apple Watch was strapped on only at my request as she walked out the door.

Soon, this became a routine.

“Can I go outside and play?”

“Yes. Wear the watch!” I’d reply.

“I knowwww.”

It took over a month to get to the point that she would remember the watch on her own.

I have to admit that I didn’t fully demo the Apple Watch to her or explain how to use it in detail, beyond a few basics in those beginning weeks. While I could have made her an expert, I suppose, I think it’s important to realize that many parents are less tech-savvy than their kids. The children are often left to fend for themselves when it comes to devices, and this particular kid has had several devices. For that reason, I was curious how a fairly tech literate child who has moved from iPad to Android to now iPhone, and who hops from Windows to Mac to Chromebook, would now adapt to an Apple Watch.

As it turned out, she found it a little confusing.

“What do you think about the Watch?” I asked one evening, feeling her out for an opinion.

“It’s fun…but sometimes I don’t really understand it,” she replied.

“What don’t you understand?”

“I don’t know. Just…almost everything,” she said, dramatically, as tweens tend to do. “Like, sometimes  I don’t know how to turn up and down the volume.”

Upon prodding, I realized she meant this: she was confused about how to adjust the alert volume for messages and notifications, as well as how to switch the Watch from ringing for calls to vibrating, or to silence calls altogether with Do Not Disturb. (It was her only real complaint, but annoying enough to count as “almost everything,” I guess!)

I’ll translate now from kid language what I learned here.

First, given that the “Do Not Disturb” option is accessible from a swipe gesture, it’s clear my daughter hadn’t fully explored the watch’s user interface. It didn’t occur to her that the swipe gestures of the iPhone would have their own Apple Watch counterparts. (And also, why would you swipe up from the bottom of the screen for the Control Center when that doesn’t work on the iPhone anymore? On iPhone, you now swipe down from the top-right to get to Control Center functions.)

And she definitely hadn’t discovered the tiny “Settings” app (the gear icon) on the Apple Watch’s Home Screen to make further changes.

Instead, her expectation was that you should be able to use either a button on the side for managing volume — you know, like on a phone — or maybe the digital crown, since that’s available here. But these physical controls — confusingly — took her to the “unimportant stuff” like the Home Screen and the app switcher, when in her opinion it was calls, notifications, and alerts that were the watch’s main function.

And why do you need to zoom into the Home Screen with a turn of the digital crown? She wasn’t even using the apps at this point. There weren’t that many on the screen.

Curious, since she didn’t care for the current lineup of apps, I asked for feedback.

“What kind of apps do you want?” I asked.

“Roblox and TikTok.”

“Roblox?!” I said, laughing. “How would that even work?”

As it turned out, she didn’t want to play Roblox on her watch. She wanted to respond to her incoming messages and participate in her group chats from her watch.

Oh. That’s actually a reasonable idea. The Apple Watch is, after all, a messaging device.

And since many kids her age don’t have a phone or the ability to use a messaging app like Snapchat or Instagram, they trade Roblox usernames and friend each other in the game as a way to work around this restriction. They then message each other to arrange virtual playdates or even real-life ones if they live nearby.

But the iOS version of the Roblox mobile app doesn’t have an Apple Watch counterpart.

“And TikTok?” I also found this hilarious.

But the fact that Apple Watch is not exactly an ideal video player is lost on her. It’s a device with a screen, connected to the internet. So why isn’t that enough, she wondered?

“You could look through popular TikToks,” she suggested. “You wouldn’t need to make an account or anything,” she clarified, as if these details would fix the only problems she saw with her suggestion.

Even if the technology was there, a TikTok experience on the small screen would never be a great one. But this goes to show how much interest in technology is directly tied to what apps and games are available, compared with the technology platform itself.

Other built-in features had even less appeal than the app lineup.


Though I had set up some basic Activity features during the setup process, like a “Move Goal,” she had no idea what any of that was. So I showed her the “rings” and how they worked, and she thought it was kind of neat that the Apple Watch could track her standing. However, there was no genuine interest or excitement in being able to quantify her daily movement — at least, not until one day many weeks later when we were hiking and she heard my watch ding as my rings closed and wanted to do the same on hers. She became interested in recording her steps for that hike, but the interest wasn’t sustained afterwards.

Apple said it built in the Activity features so kids could track their move goal and exercise progress. But I would guess many kids won’t care about this, even if they’re active. After all, kids play — they don’t stop to ask, “How much did I play? Did I move enough today?” Nor should they, really.

As a parent, I can see her data in the Health app on my iPhone, which is the device I use to manage her Apple Watch. It’s interesting, perhaps, to see things like her steps walked or flights climbed. But it’s not entirely useful, as her Apple Watch is not continually worn throughout the day. (She finds the bands uncomfortable — we tried Sport Band and Sport Loop and she still fiddles with them constantly, trying to readjust them for comfort.)

In addition, if I did want to change her Activity goals later on for some reason, I’d have to do so from her Watch directly.

Of course, a parent doesn’t buy a child an Apple Watch to track their exercise. It’s for the location tracking features. That is the only real reason a parent would consider this device for a younger child.

On that front, I did like that the watch was a GPS tracker that was looped into our household Apple ecosystem as its own device with its own phone number. I liked that I could ping the Watch with “Find My” when it’s lost — and it was lost a lot, as I noted. I liked that I could manage the Watch from my iPhone, since it’s very difficult to get a device back in hand to make changes once it’s been handed over to someone else.

I also liked that the Apple Watch was always available for use. This may have been one of its biggest perks, in fact. Unlike my daughter’s iPhone, which is almost constantly at 10-20% battery (or much less), the watch was consistently charged and ready when it was time for outdoor play.

I liked that it was easier for her to answer a call on the Apple Watch compared with digging her phone out of her bike basket or bag. I liked that she didn’t have to worry about constantly holding onto her phone while out and about.

I also appreciated that I could create geofenced alerts — like when she reached the park or a friend’s house, for example, or when she left. But I didn’t like that the ability to do so is buried in the “Find My” app. (You tap on the child’s name in the “People” tab. Tap “Add” under “Notifications.” Tap “Notify Me.” Tap “New Location.” Do a search for an address or venue. Tap “Done.”)
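
For what it’s worth, the idea behind a geofenced alert is much simpler than Find My’s interface suggests: the watch reports a location, and the system checks whether that location falls within a chosen radius of a saved place. Here is a rough Python sketch of that underlying check (purely illustrative; this is not how Apple implements or exposes Find My, and the coordinates are hypothetical):

```python
# Illustrative only: at its core, a geofence is a distance check against a saved point.
from math import asin, cos, radians, sin, sqrt


def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Approximate great-circle distance in meters (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))


def inside_geofence(current: tuple, fence_center: tuple, radius_m: float = 100.0) -> bool:
    """True if the current (lat, lon) falls within radius_m of the fence center."""
    return distance_m(*current, *fence_center) <= radius_m


# Hypothetical coordinates for "the park"; alert when the reported location enters the fence.
park = (40.7410, -73.9897)
print(inside_geofence((40.7412, -73.9895), park))  # True -> send an "arrived" notification
```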


I also didn’t like that when I created a recurring geofence, my daughter would be notified. Yes, privacy. I know! But who’s in charge here? My daughter is a child, not a teen. She knows the Apple Watch is a GPS tracker — we had that conversation. She knows it allows me to see where she is. She’s young, and for now she doesn’t feel like this is a privacy violation. We’ll have that discussion later, I’m sure. But at present, she likes the feel of this electronic tether to home as she experiments with expanding the boundaries of her world.

When I tweak or update a recurring alert for a geofenced location, the notification she receives about it can be confusing or even concerning to her. I appreciate that Apple is being transparent and trying to give kids the ability to understand they’re being tracked — but I’d also argue that most parents who suddenly gift an expensive watch to their child will explain why they’re doing so. This is a tool, not a toy.

Also, the interface for configuring geofences is cumbersome. By comparison, the family tracking app Life360, which we normally use, has a screen where you simply tap add, search to find the location, and then you’re done. One tap on a bell icon next to the location turns its alerts on or off. (You can get all granular about it: recurring, one time, arrives, leaves, etc. — but you don’t have to. Just tap and be alerted. It’s more straightforward.)


One feature I did like on the Apple Watch, but sadly couldn’t really use, was its Schooltime mode — a sort of remotely-enabled, scheduled version of Do Not Disturb. This feature blocks apps and complications and turns on the Do Not Disturb setting for the kids, while letting emergency calls and notifications break through. (Make sure to set up Shared Contacts, so you can manage that aspect.)

Currently, we have no use for Schooltime, thanks to this pandemic. My daughter is attending school remotely this year. I could imagine how this may be helpful one day when she returns to class.

But I also worry that if I sent her to class with the Apple Watch, other kids would judge her for her expensive device. I worry that teachers (who don’t know about Schooltime) will judge me for having her wear it. I worry kids will covet it and ask to try it on. I worry about a kid running off with it, causing additional disciplinary headaches for teachers. I worry it will get smashed on the playground or during PE, or somehow fall off because she meddled with the band for the umpteenth time. I worry she’ll take it off because “the strap is so annoying” (as I was told), then leave it in her desk.

I don’t worry as much about the iPhone at school, because it stays in her backpack the whole time due to school policy. It doesn’t sit on her arm as a constant temptation, “Schooltime” mode or otherwise.

The Apple Watch Family Setup system also doesn’t adapt to the expanding needs of teen monitoring as the child ages, the way other family tracking solutions do.

To continue the Life360 comparison, the app today offers features for teen drivers and its new privacy-sensitive location “bubbles” for teens now give them more autonomy. Apple’s family tracking solution, meanwhile, becomes more limited as the child ages up.

For instance, Schooltime doesn’t work on an iPhone. Once the child upgrades to an iPhone, you are meant to use parental controls and Screen Time features to manage what apps are allowed and when she can use her device. It seems a good transitional step to the phone would be a way to maintain Schooltime mode on the child’s next device, too.

Instead, by buying into Apple Watch for its Family Setup features, what you’ll soon end up with is a child who now owns both an Apple Watch and a smartphone. (Sure, you could regift it or take it back, I suppose…I certainly do wish you luck if you try that!)

Beyond the overboard embrace of consumerism that is buying an Apple Watch for a child, the biggest complaint I had was that there were three different apps for me to use to manage and view data associated with my daughter’s Apple Watch. Her tracked activity appeared in my Health app. Location tracking and geofence configuration lived in the Find My app. And remote configuration of the Apple Watch itself, including Schooltime, was found in my Watch app.

I understand that Apple built the Watch as a personal device designed for a single user, and that it had to stretch to turn the Watch into a family tracking system. But what Apple is doing here is really just pairing the child’s watch with the parent’s iPhone and tacking on extra features, like Schooltime. It hasn’t approached this as a whole new system designed from the ground up for families or for their expanding needs as the child grows.

As a result, the whole system feels underdeveloped compared with existing family tracking solutions. And given the numerous features to configure, adjust, and monitor, Family Setup deserves its own app or at the very least, its own tab in a parent’s Watch app to simplify its use.

At the end of the day, if you are letting your child out in the world — beyond school and supervised playdates — the Apple Watch is a solution, but it may not be the best solution for your needs. If you have specific reasons why your child will not get their own phone now or anytime soon, the Apple Watch may certainly work. But if you don’t have those reasons, it may be time to try a smartphone.

Both Apple and Google now offer robust parental control solutions for their smartphone platforms that can mitigate many parents’ concerns over content and app addiction. And considering the cost of a new Apple Watch, the savings just aren’t there — especially when considering entry-level Android phones or other hand-me-down phones as the alternative.

[Apple provided a loaner device for the purposes of this review. My daughter was cited and quoted with permission but asked for her name to not be used.]


While mainland America struggles with covid apps, tiny Guam has made them work


As covid-19 cases spiral out of control in the US, states are scrambling to fight the virus with an increasingly stretched arsenal. Many of them have the same weapons at their disposal: restrictions on public gatherings and enforcement of mask wearing, plus testing, tracing, and exposure notifications.

But while many states struggle to get their systems to work together, Guam—a tiny US territory closer to the Korean Peninsula than the North American mainland—may offer clues on how to rally communities around at least one part of the puzzle: smartphone contact tracing.

With no budget, and relying almost entirely on a grassroots volunteer effort, Guam has gotten 29% of the island’s adult residents to download its exposure notification app, a rate of adoption that outstrips states with far more resources. 

A collaborative effort 

Guam diagnosed its first covid cases in March, but a few weeks later, it gained international attention—and a much bigger case load—when a covid-stricken US Navy ship was ordered to dock at the Naval base on the island. Sailors who tested negative were quarantined in local hotels and forbidden from interacting with civilians.

Having so many positive cases on the island drove home how vulnerable the island really was—but it also created a lot of new volunteers looking for ways to help out. 

Around the same time, Vince Munoz, a developer at the Guam-based software company NextGenSys, got a call. The island was being offered a partnership with the PathCheck Foundation, a nonprofit that was building government contact tracing apps. Munoz immediately saw an opportunity to help his community fight this new threat.

“It is something you do to help other people,” Munoz says. “It empowers you to help reduce the spread of the virus.” 

Digital contact tracing is a potentially low-touch way for health departments to reduce the spread of covid-19 by using smartphones to track who’s been exposed. And even if exposure notifications aren’t the panacea many technologists hoped for, new research suggests that breaking even a few links in the chain of transmission can save lives. 

So Munoz’s team of volunteers connected with PathCheck—which was founded at MIT—and they started building an app called Covid Alert. Like the majority of America’s exposure notification apps, it uses a system built by Google and Apple and uses Bluetooth signals to alert people that they’ve crossed paths with someone who later tests positive. From there, they are urged to contact the island’s local health authorities and take appropriate action. Everything is done anonymously to protect privacy. 
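
Under the Google-Apple framework, each phone broadcasts short-lived random identifiers over Bluetooth and remembers the identifiers it hears nearby; when someone tests positive and chooses to share their keys, every phone checks locally for overlap. The real system derives identifiers cryptographically and weighs exposures by duration and signal strength, but a stripped-down sketch of the matching idea looks roughly like this (illustrative only, not PathCheck’s or Apple’s code):

```python
# Simplified illustration of decentralized exposure matching.
# Real deployments derive rolling identifiers cryptographically and score
# each contact by duration and Bluetooth signal strength before alerting.
from secrets import token_hex


def new_rolling_id() -> str:
    """A random identifier a phone might broadcast for a short time window."""
    return token_hex(16)


def check_exposure(ids_heard: set, published_positive_ids: set) -> bool:
    """Each phone checks locally whether any identifier it heard was later
    published by someone who tested positive. No names or locations involved."""
    return bool(ids_heard & published_positive_ids)


# A phone logs the identifiers it heard over Bluetooth during the day...
heard = {new_rolling_id() for _ in range(5)}
contact_id = new_rolling_id()
heard.add(contact_id)

# ...and later downloads the identifiers shared by people who tested positive.
print(check_exposure(heard, {contact_id}))  # True -> show an exposure notification
```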

After several months of testing and tweaking, the app was ready. But it was still missing an important piece: users. After all, any contact tracing app needs as many downloads as possible to make a difference. Munoz knew just the people to build buzz: the Guam Visitors Bureau. Tourism is massively important to the island, which gets more than 1.5 million visitors each year—almost 10 times the local population. In pre-pandemic times, the bureau helped tourists plan trips to Guam’s “star-sand beaches.” Staff jumped at the chance to help.

With assistance from Thane Hancock, a CDC epidemiologist based on the island, and Janela Carrera, public information officer for the Guam Department of Public Health and Social Services, the team started building a marketing campaign.

“Because we didn’t have any funding, we decided to do a grassroots campaign,” says Monica Guzman, CEO of Guam-based marketing company Galaide Group, who works with the bureau. “Guam is a very small community. We’re all either related or neighbors or friends.”

While PathCheck and Munoz’s development team worked on building the app, the Visitors Bureau began reaching out to community groups and nonprofits to build awareness. It hosted Zoom calls with organizations, schools, and cultural groups across the island with the message that the app could help suppress the virus, if enough people were willing to “be a covid warrior.”

“The schools, the government agencies, the media, they all jumped on board,” Carrera says.

Together, these efforts are part of what ethics researchers at the Swiss Federal Institute of Technology recently called the “piecemeal creation of public trust.” To get people to use a novel technology like exposure notification, you have to reach people where they live and get buy-in from community leaders. 

It takes a village (on WhatsApp)

Once the app was ready to launch in September, it was time to get the word out. 

The day before the official launch, Visitors Bureau marketing manager Russell Ocampo sent a message about the app to Guam’s notoriously large and unruly WhatsApp groups. That message ricocheted around the island, resulting in almost 3,000 downloads immediately. “I received it back like 10 times from other people,” he says. 

A further 6,000 people signed up the next day during a press conference, including the governor, who downloaded it while live on the air. 

The effort received a show of support that many US states and territories could only dream of. All three major telecom companies on the island sent free texts encouraging people to download the app. A local TV station, meanwhile, ran a two-hour “download-a-thon” to try to drive uptake. The show featured performances by local musicians, interspersed with information about the app, including debunking myths about privacy and other ongoing concerns. Viewers were offered the chance to win $10,000 in prize money, much of it donated personally by Guam Visitors Bureau members and others who worked on the app, if they could prove they had downloaded the app during the program.

The Guam Visitors Bureau has offered other cash prizes for government agencies whose employees rack up the most downloads. And small businesses, eager to get the economy back on its feet, have offered giveaways to customers — one shopping center is offering a box of chocolates to visitors who download the app.

Challenges

But, crucially, has the app worked? Despite a successful launch, Guam’s covid-19 response has faced major challenges overall. Many people, especially those from minority ethnic groups who came to Guam from other Pacific islands, live in multigenerational, overcrowded housing, often with limited access to health care and even basic sanitation like municipal sewer service. The health department recently launched door-to-door testing in these neighborhoods and found positivity rates as high as 29%.

At the beginning of April, the governor’s office projected that the virus could kill 3,000 people—almost 2% of the island’s population—over the next five months. That dire prediction has yet to come true. As of Monday, Nov. 30, 112 people have reportedly died of covid on the island. Overall, the territory’s trajectory has been typical of America itself: Cases remained low through most of the summer, before ticking steadily up through the fall and spiking in early November. 

While a large proportion of residents have downloaded the app, one major challenge has been getting people to upload positive test results. This is in part because people are often in shock when they first receive the news about their diagnosis, according to Janela Carrera, the health department officer.

Contact tracers call everyone who tests positive, and part of their script involves recommending that people upload their positive result: That’s how the app knows to send (anonymous) exposure notifications to people who’ve been near each other. But that first call can feel extremely stressful, and it’s not a great time to suggest they try out a new app or go through the process of entering a special numerical code that kicks off the chain of notifications. 

“Especially if they’re symptomatic, they may feel like, ‘oh my gosh, I may not make it through this,’ or ‘I might be infecting others in my home.’ So [contact tracers] follow up with them a few days later, once they’ve had a chance to recuperate, and offer the code then,” Carrera says.

Clearly, though, some people are uploading the codes. “I’ve had co-workers tell me, ‘Janela, oh my God, I got a notification!’” Carrera says. Ocampo himself received one in October, and quarantined for 14 days. 

This is boosted by the fact that when public health workers do their door-to-door testing, they offer information about how to download the app. At the same time, other strategies, often shared through multilingual PSAs on local radio, may be more effective for people in these communities, who often don’t use smartphones for anything more than texting, according to Munoz.

Guam faces one other challenge that’s very common worldwide. It’s difficult to know exactly what effect the app is having, says Sam Zimmermann, CTO of PathCheck Foundation.

Zimmermann says: “Because Guam cares a lot about privacy and making sure their systems are safe, their app doesn’t have any kind of analytics or logging,” like whether users actually learn how the app works after downloading it or whether they pay attention if they receive an exposure notification. 

Still, while the team launched the app hoping to achieve a 60% download rate based on an early mathematical model, there’s now evidence that even a much smaller portion of the population using it may have a positive impact.

Munoz, for one, hopes the app will help take pressure off health officials doing labor-intensive outreach like door-to-door testing.

“Manual contact tracers have a very difficult job. They can’t keep up with everyone who tests positive,” Munoz says. “Any little percentage helps.”

This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.


With an eye for what’s next, longtime operator and VC Josh Elman gets pulled into Apple


Josh Elman is moving over to Apple, he announced on Twitter today, saying he will be focused on the company’s App Store and helping “customers discover the best apps for them.”

Asked for more details about his new role, Elman referred us to Apple, which confirmed his employment but declined to offer more, including his new title. (This is typical operating procedure for the tech giant.)

Certainly, Elman has plenty of experience with fast-growing technologies and popular apps in particular.  One of his first jobs out of Stanford was with RealNetworks, a bubble-era internet streaming company that went public in 1997, three years after it was founded. (It remains publicly traded, though its market cap is just $60 million these days.)

After RealNetworks, it was on to LinkedIn, which Elman joined in 2004 as a senior product manager when the company was just two years old.  From there, Elman worked in product management at the custom apparel and accessories company Zazzle, then at Facebook, then Twitter.

Perhaps unsurprisingly, the venture firm Greylock brought Elman into the fold in 2011 as a principal, and by 2013 he was a general partner, investing in social networking deals like Musical.ly (ByteDance acquired the company and turned it into TikTok); Nextdoor (which is reportedly eyeing ways to go public); Houseparty (acquired last year by Epic Games, which is now suing Apple); and Discord (which is sewing up a private funding deal at a valuation of roughly $7 billion).

Somewhat unexpectedly, in 2018, Elman left his full-time role with Greylock to join a company notably not in the firm’s portfolio, the stock-trading platform Robinhood. Just as interesting: though he took on the role of VP of product at the popular and fast-growing startup, he didn’t cut ties with Greylock entirely, taking the title of venture partner and remaining a board member at his portfolio companies.

Asked about the move, Elman told TC at the time that he had “started talking with a few of my partners about how I want to spend the next decade of my professional life. What gets me the most energized is when I can dig in on product with a hyper-growth company.”

Ultimately, the role didn’t last long, with Elman leaving last November after less than two years on the job. Now Elman — who said he’s stepping away from some of his Greylock-related board seats — has a new chance to do what he loves most from one of the most powerful perches in the world: the App Store.

“I’m really excited to get to build ways to help over a billion customers and millions of developers connect,” he tweeted earlier. He added in the same thread: “I recently found my college resume. My career objective was ‘To create great technology that changes people’s lives’. Still at it :)”
