
Review: The iPhone 12 Pro Max is worth its handling fee


The iPhone 12 Pro Max is probably the easiest of all of the new iPhone 12 models to review. It’s huge, and it has a really, really great camera, probably one of the best ever put in a smartphone, if not the best. For those of you coming from an iPhone “Max” or “Plus” model already, it’s a no-brainer: get it, it’s fantastic. It has everything Apple has to offer this year, and it’s even a bit thinner than the iPhone 11 Pro Max.

For everyone else — the potential upsizers — this review has only a single question to answer: do the improvements in camera, screen size and potentially battery life make it worth the hit to handling ergonomics from its slim but thicc build?

The answer? Yes, but only in certain conditions. Let’s get into it.

Build

I’m not going to spend a ton of time on performance or go through a feature-by-feature breakdown of the iPhone 12 Pro Max. I’ve published a review of the iPhone 12 and iPhone 12 Pro here and just today published a review of the iPhone 12 mini. You can check those out for baseline chat about the whole lineup. 

Instead, I’m going to focus specifically on the differences between the iPhone 12 Pro Max and the rest of the lineup. This makes sense because Apple has returned us to a place that we haven’t been since the iPhone 8. 

Though the rest of the lineup provides a pretty smooth arc of choices, the iPhone 12 Pro Max introduces a pretty solid cliff of unique features that could pull some people up from the iPhone 12 Pro. 

The larger size sets off all of the work Apple did to make the iPhone 12 Pro look like a jewel: gold-coated steel edges, the laminated clear-and-frosty back, gold accent rings around the cameras and a glossy logo. All of it screams posh.

Some of you may recall that there was a time when a market existed for ultra-luxury phone makers like Vertu, which used fine materials to “elevate” what were usually pretty poorly implemented Symbian or Android phones at heart. Leather, gold, crystal and even diamond were used to craft Veblen goods for the über-rich, just so they could stay ‘above’ the proles. Now, Apple’s materials-science experimentation and execution are at such a high level that you really can’t get anything on the level of this kind of pure luxe manifestation in a piece of consumer electronics from anyone else, even a ‘hand maker.’

To be fair, Vertu and the other makers didn’t die because Apple got good at gold; they died because good software is needed to breathe life into these bejeweled golems. And Apple got better at what those makers did far faster than they could ever get good at what Apple does.

This is a great piece of kit and, as mentioned, even thinner than the previous Max model while staying about the same width (0.3mm wider). And in my opinion, the squared-off edges of this year’s aesthetic make a phone this size harder to hold, not easier: essentially the opposite of the effect they have on the smaller models. For a phone this size I’d imagine everyone is going to use a case anyway, so that’s probably moot, but it’s worth noting.

My feelings on the larger iPhones, which I haven’t used as a daily driver since the iPhone 8, remain unchanged: these are two-handed devices best used as tablet or even laptop replacements. If you run your life from these phones, then it makes sense that you’d want a huge screen with plenty of real estate for a browser, a picture-in-picture video chat and a generous keyboard all at once.

The differences 

When we’re talking about whether or not to move up to this beast, I think it’s helpful to have a list of everything that is different from the iPhone 12 Pro, plus the things you might assume are different but aren’t.

Screen. The 6.7-inch iPhone 12 Pro Max screen has a resolution of 2778×1284 at 458 ppi. That’s nearly identical to, but slightly under, the iPhone 12 Pro’s 460 ppi, so while it’s technically a difference I’d count it as a wash. The screen’s size, of course, is still a factor, as is the software support that lets some Apple and third-party apps take advantage of the increased real estate.
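If you want to check those density figures yourself, they fall straight out of the resolution and the diagonal. A quick sketch, using the unrounded diagonals Apple lists in its fine print (roughly 6.68 and 6.06 inches; the rounded 6.7 and 6.1 marketing numbers make the naive math land a hair low):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2778, 1284, 6.68)))  # 458 (iPhone 12 Pro Max)
print(round(ppi(2532, 1170, 6.06)))  # 460 (iPhone 12 Pro)
```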

Performance. The iPhone 12 Pro Max performs exactly as you’d expect in the CPU and GPU departments, which is to say exactly the same as the iPhone 12 Pro. It also has the same 6GB of RAM on board. Battery performance was comparable to my iPhone 11 Pro Max testing, which is to say it outlasted a typical waking day, though I could probably drain it over a long travel day.

Ultra wide angle camera. Exactly the same. Massively improved over the iPhone 11 Pro thanks to software correction and the addition of Night Mode, but identical across the iPhone 12 Pro lineup.

Telephoto camera. This is a tricky one because it uses the same sensor as the iPhone 12 Pro, but features a new lens assembly that results in a 2.5x (65mm-equivalent) zoom factor. This means that though the capture quality is the same, you can achieve tighter framing at the same distance from your subject. As a heavy telephoto user (I shot around 30% of my pictures over the last year with the iPhone 11 Pro’s telephoto), I love this additional control and the slightly higher optical compression that comes with it.

The framing control is especially nice with portraits. 

Though it comes in handy with distant subjects as well.
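For context on that 2.5x figure: Apple quotes zoom factors relative to the 26mm-equivalent wide camera, and the iPhone 12 Pro’s telephoto is a 2x, 52mm-equivalent lens. Both are published specs, though neither appears above, so treat this as a sketch of the framing math:

```python
# Zoom factors are quoted relative to the 26mm-equivalent wide camera.
WIDE_EQUIV_MM = 26

tele_pro     = 2.0 * WIDE_EQUIV_MM  # 52mm equivalent (iPhone 12 Pro)
tele_pro_max = 2.5 * WIDE_EQUIV_MM  # 65mm equivalent (iPhone 12 Pro Max)

# At a fixed distance, a subject's size in the frame scales linearly
# with focal length, so the Max frames the same subject ~25% larger.
print(f"{tele_pro_max / tele_pro:.2f}x tighter framing")  # 1.25x
```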

There is also one relatively stealthy update to the telephoto (I cannot find it on Apple’s website, but I have verified that it is true). It is the only lens across the iPhone 12 lineup, other than the wide angle, to get the new optical stabilization upgrade, which allows it to make 5,000 micro-adjustments per second to stabilize an image in low light or shade. It still uses standard lens-style stabilization, not the new sensor-shift OIS used in the wide angle, but that is 5x as many adjustments as the iPhone 11 Pro or even the iPhone 12 Pro can make.

The results of this can be seen in this shot, a handheld indoor snap. Aside from the tighter lens crop, the additional stabilization adjustments result in a crisper shot with finer detail even though the base sensor is identical. It’s a relatively small improvement in comparison to the wide angle, but it’s worth mentioning and worth loving if you’re a heavy telephoto user.  

Wide angle camera. The bulk of the iPhone 12 Pro Max difference is right here. This is a completely new camera that pushes the boundaries of what the iPhone has been capable of shooting to this point. It’s actually made up of three big changes:

  • A new f/1.6 aperture. A larger aperture is, plain and simple, a bigger hole that lets more light in.
  • A larger sensor with 1.7-micron pixels. Bigger pixels mean better light gathering and color rendition, and a larger sensor means higher-quality images.
  • An all-new sensor-shift OIS system that stabilizes the sensor, not the lens. This is advantageous for a few reasons: sensors are lighter than lenses, so they can be moved, stopped and started again with more speed and precision, and the adjustments happen faster.

Sensor-shift OIS systems are not new; they were first piloted in the Minolta DiMAGE A1 back in 2003. But most phone cameras have used lens-shift technology because it is vastly cheaper and easier to implement.

All three things work together to deliver pretty stellar imaging results. They also make the camera bump on the iPhone 12 Pro Max a bit taller, tall enough that Apple’s own case for the phone has an additional lip to cover it. I’d guess that this additional thickness stems directly from the wide angle lens assembly needing to be larger to accommodate the sensor and new OIS mechanism, and from Apple being unwilling to let one camera stick out further than the others.

These are Night Mode samples, but even there you can see the improvements in brightness and sharpness. Apple claims 87% more light gathering ability with this lens and in the right conditions it’s absolutely evident. Though you won’t be shooting SLR-like images in near darkness (Night Mode has its limits and tends to get pretty impressionistic when it gets very dim) you can absolutely see the pathway that Apple has to get there if it keeps making these kinds of improvements. 
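That 87% claim also holds up to a rough back-of-the-envelope check against the iPhone 11 Pro, assuming the commonly reported specs for that generation’s wide camera (f/1.8, roughly 1.4-micron pixels). Those numbers aren’t in this article, so this is a sketch rather than Apple’s own math:

```python
def relative_light(f_old: float, pitch_old: float,
                   f_new: float, pitch_new: float) -> float:
    """Light gathered per pixel scales with pixel area (pitch squared)
    and inversely with the square of the f-number."""
    return (f_old / f_new) ** 2 * (pitch_new / pitch_old) ** 2

# Assumed iPhone 11 Pro wide (f/1.8, 1.4um) vs. 12 Pro Max (f/1.6, 1.7um)
gain = relative_light(1.8, 1.4, 1.6, 1.7)
print(f"~{(gain - 1) * 100:.0f}% more light")  # ~87% more light
```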

Wide angle shots from the iPhone 12 Pro Max display slightly better sharpness, lower noise and better color rendition than the iPhone 12 Pro, and a much bigger improvement over the iPhone 11 Pro. In bright conditions you will be hard pressed to tell the difference between the two iPhone 12 models, but if you’re on the lookout the signs are there: better stabilization when handheld in open shade, better noise levels in dimmer areas and slightly improved detail sharpness.

The iPhone 12 Pro already delivers impressive results year on year, but the iPhone 12 Pro Max leapfrogs it within the same generation. It’s the most impressive gain Apple has ever had in a model year, image-wise. The iPhone 7 Plus and the introduction of Apple’s vision of a blended camera array were forward looking, but even then image quality was pretty much at parity with the smaller models that year.

A very significant jump this year. Can’t wait for this camera to trickle down the lineup.

LiDAR. I haven’t really mentioned LiDAR benefits yet, but I went over them extensively in my iPhone 12 Pro review, so I’ll cite them here.

LiDAR is an iPhone 12 Pro and iPhone 12 Pro Max only feature. It enables faster auto-focus lock-in in low light scenarios as well as making Portrait Mode possible on the Wide lens in Night Mode shots. 

First, the auto-focus is insanely fast in low light. The image above is what is happening, invisibly, to enable that. The LiDAR array constantly scans the scene with an active grid of infrared light, producing depth and scene information that the camera can use to focus. 

In practice, what you’ll see is that the camera snaps to focus quickly in dark situations where you would normally find it very difficult to get a lock at all. The LiDAR-assisted low light Portrait Mode is very impressive, but it only works with the Wide lens. This means that if you are trying to capture a portrait and it’s too dark, you’ll get an on-screen prompt that asks you to zoom out. 

These Night Mode portraits are demonstrably better looking than standard Portrait Mode shots on the iPhone 11, because those have to be shot with the telephoto, meaning a smaller, darker aperture. They also do not have the benefit of the brighter sensor, or of LiDAR helping to separate the subject from the background, something that gets insanely tough to do in low light with RGB sensors alone.

As a note, the LiDAR features work great at distances under 5 meters, in concert with Apple’s Neural Engine, to produce these low-light portraits. Beyond that range it’s not much use, because the reflected infrared signal falls off steeply with distance.

Well-lit Portrait Mode shots on the iPhone 12 Pro Max will still rely primarily on the optical information coming in through the lenses, rather than LiDAR. It’s simply not needed for the most part if there’s enough light.

The “should I buy it?” workflow

I’m straight up copying a couple of sections for you now from my iPhone 12 Pro and iPhone 12 mini reviews because the advice applies across all of these devices. Fair warning.

In my iPhone 12/12 Pro review I noted my rubric for selecting a personal device:

  • The most compact and unobtrusive shape.
  • The best camera that I can afford.

And this is the conclusion I came to at the time:

The iPhone 12 Pro is bested in the camera department by the iPhone 12 Pro Max, which has the biggest and best sensor Apple has yet created. (But its dimensions are similarly biggest.) The iPhone 12 has been precisely cloned in a smaller version with the iPhone 12 mini. By my simple decision-making matrix, either one of those is a better choice for me than either of the models I’ve tested. If the objective becomes finding the best compromise between the two, the iPhone 12 Pro is the pick.

But now that I’ve had time with the Pro Max and the mini, I’ve been able to work up a little decision flow for you:
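(The flow ran as a graphic in the original article. Below is a minimal sketch of its logic in code, reconstructed from the rubric and conclusion quoted above; the function name and inputs are my own invention.)

```python
# A minimal sketch of the decision flow described above, reconstructed
# from the review's stated rubric; names and inputs are hypothetical.

def pick_iphone_12(best_camera_period: bool, phone_is_main_device: bool,
                   smallest_possible: bool) -> str:
    if best_camera_period or phone_is_main_device:
        return "iPhone 12 Pro Max"  # biggest sensor and screen; two-handed
    if smallest_possible:
        return "iPhone 12 mini"     # the iPhone 12, precisely cloned smaller
    return "iPhone 12 Pro"          # the best compromise between the two

print(pick_iphone_12(True, False, False))  # -> iPhone 12 Pro Max
```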

If you haven’t gathered it by now, I recommend the iPhone 12 Pro Max to two kinds of people: the ones who want the absolute best camera quality on a smartphone, period, and those who do the bulk of their work on a phone rather than on another kind of device. There is a distinct ‘fee’ that you pay in ergonomics to move to a Max iPhone. Two hands are just plain needed for some operations, and single-handed moves are precarious at best.

Of course, if you’ve already self-selected into the cult of Max, then you’re probably just wondering if this new one is worth a jump from the iPhone 11 Pro Max. In short: maybe not. It’s great, but it’s not light-years better unless you’re doing photography on it. Anything older, though, and you’re in for a treat. It’s well made, well equipped and well priced. The storage upgrades are less expensive than ever and it’s really beautiful.

Plus, the addition of the new wide angle to the iPhone 12 Pro Max makes this the best camera system Apple has ever made and quite possibly the best subcompact camera ever produced. I know, I know, that’s a strong statement, but I think it’s supportable: the iPhone is best in class when it comes to smartphones, and no camera company on the planet is doing the kind of blending and computer vision Apple is doing. Though larger-sensor compact cameras still obliterate the iPhone in low-light situations, the steady progress of Apple’s ML-driven blended system suggests that gap will keep closing.

A worthy upgrade, if you can pay the handling costs.



Mike Cagney is testing the boundaries of the banking system for himself — and others


Founder Mike Cagney is always pushing the envelope, and investors love him for it. Not long after sexual harassment allegations prompted him to leave SoFi, the personal finance company that he cofounded in 2011, he raised $50 million for a new lending startup called Figure, which has since raised at least $225 million from investors and was valued a year ago at $1.2 billion.

Now, Cagney is trying to do something unprecedented with Figure, which says it uses a blockchain to more quickly facilitate home equity, mortgage refinance, and student and personal loan approvals. The company has applied for a national bank charter in the U.S. under which it would not take FDIC-insured deposits but could take uninsured deposits of more than $250,000 from accredited investors.

Why does it matter? The approach, as American Banker explains it, would bring regulatory benefits. As it reported earlier this week, “Because Figure Bank would not hold insured deposits, it would not be subject to the FDIC’s oversight. Similarly, the absence of insured deposits would prevent oversight by the Fed under the Bank Holding Company Act. That law imposes restrictions on non-banking activities and is widely thought to be a deal-breaker for tech companies where banking would be a sidelight.”

Indeed, if approved, Figure could pave the way for a lot of fintech startups — and other retail companies that want to wheel and deal lucrative financial products without the oversight of the Federal Reserve Board or the FDIC — to nab non-traditional bank charters.

As Michelle Alt, whose year-old financial advisory firm helped Figure with its application, tells AB: “This model, if it’s approved, wouldn’t be for everyone. A lot of would-be banks want to be banks specifically to have more resilient funding sources.” But if it’s successful, she adds, “a lot of people will be interested.”

One can only guess at what the ripple effects would be, though the Bank of Amazon wouldn’t surprise anyone who follows the company.

In the meantime, the strategy would seemingly be a high-stakes, high-reward development for a smaller outfit like Figure, which could operate far more freely than banks traditionally have, but also without a safety net for itself or its customers. The most glaring danger would be a bank run, wherein the accredited individuals who are today willing to lend money to the platform at high interest rates begin demanding it back at the same time. (It happens.)

Either way, Cagney might find a receptive audience right now with Brian Brooks, a longtime Fannie Mae executive who served as Coinbase’s chief legal officer for two years before jumping this spring to the Office of the Comptroller of the Currency (OCC), an agency that ensures that national banks and federal savings associations operate in a safe and sound manner.

Brooks was made acting head of the agency in May and green-lit one of the first national charters to go to a fintech, Varo Money, this past summer. In late October, the OCC also granted SoFi preliminary, conditional approval of its own application for a national bank charter.

While Brooks isn’t commenting on speculation around Figure’s application, in July, during a Brookings Institution event, he reportedly commented about trade groups’ concerns over his efforts to grant fintechs and payments companies charters, saying: “I think the misunderstanding that some of these trade groups are operating under is that somehow this is going to trigger a lighter-touch charter with fewer obligations, and it’s going to make the playing field un-level . . . I think it’s just the opposite.”

Christopher Cole, executive vice president at the trade group Independent Community Bankers of America, doesn’t seem persuaded. Earlier this week, he expressed concern about Figure’s bank charter application to AB, saying he suspects that Brooks “wants to approve this quickly before he leaves office.”

Brooks’s days are surely numbered. Last month, he was nominated by President Donald Trump to a full five-year term leading the federal bank regulator and is currently awaiting Senate confirmation. The move — designed to slow down the incoming Biden administration — could be undone by President-elect Joe Biden, who can fire the comptroller of the currency at will and appoint an acting replacement to serve until his nominee is confirmed by the Senate.

Still, Cole’s suggestion is that Brooks has enough time to figure out a path forward for Figure, and, if its novel charter application is approved and stands up to legal challenges, for a lot of other companies, too.


We read the paper that forced Timnit Gebru out of Google. Here’s what it says


On the evening of Wednesday, December 2, Timnit Gebru, the co-lead of Google’s ethical AI team, announced via Twitter that the company had forced her out. 

Gebru, a widely respected leader in AI ethics research, is known for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color, which means its use can end up discriminating against them. She also cofounded the Black in AI affinity group, and champions diversity in the tech industry. The team she helped build at Google is one of the most diverse in AI, and includes many leading experts in their own right. Peers in the field envied it for producing critical work that often challenged mainstream AI practices.

A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she co-authored. Jeff Dean, the head of Google AI, told colleagues in an internal email (which he has since put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met a number of conditions, which it was unwilling to meet. Gebru tweeted that she had asked to negotiate “a last date” for her employment after she got back from vacation. She was cut off from her corporate email account before her return.

Online, many other leaders in the field of AI ethics are arguing that the company pushed her out because of the inconvenient truths that she was uncovering about a core line of its research—and perhaps its bottom line. More than 1,400 Google staff and 1,900 other supporters have also signed a letter of protest.

Many details of the exact sequence of events that led up to Gebru’s departure are not yet clear; both she and Google have declined to comment beyond their posts on social media. But MIT Technology Review obtained a copy of the research paper from one of the co-authors, Emily M. Bender, a professor of computational linguistics at the University of Washington. Though Bender asked us not to publish the paper itself because the authors didn’t want such an early draft circulating online, it gives some insight into the questions Gebru and her colleagues were raising about AI that might be causing Google concern.

Titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” the paper lays out the risks of large language models—AIs trained on staggering amounts of text data. These have grown increasingly popular—and increasingly large—in the last three years. They are now extraordinarily good, under the right conditions, at producing what looks like convincing, meaningful new text—and sometimes at estimating meaning from language. But, says the introduction to the paper, “we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.”

The paper

The paper, which builds off the work of other researchers, presents the history of natural-language processing, an overview of four main risks of large language models, and suggestions for further research. Since the conflict with Google seems to be over the risks, we’ve focused on summarizing those here. 

Environmental and financial costs

Training large AI models consumes a lot of computer processing power, and hence a lot of electricity. Gebru and her coauthors refer to a 2019 paper from Emma Strubell and her collaborators on the carbon emissions and financial costs of large language models. It found that their energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data.

Strubell’s study found that one language model with a particular type of “neural architecture search” (NAS) method would have produced the equivalent of 626,155 pounds (284 metric tons) of carbon dioxide—about the lifetime output of five average American cars. A version of Google’s language model, BERT, which underpins the company’s search engine, produced 1,438 pounds of CO2 equivalent in Strubell’s estimate—nearly the same as a roundtrip flight between New York City and San Francisco.
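Those unit equivalences are easy to verify; the only number below that isn’t from the paper’s own figures is the pounds-per-metric-ton constant:

```python
# Sanity-checking the quoted CO2 figures (pounds -> metric tons).
LB_PER_METRIC_TON = 2204.62

print(round(626_155 / LB_PER_METRIC_TON))   # 284 metric tons (NAS estimate)
print(round(1_438 / LB_PER_METRIC_TON, 2))  # 0.65 metric tons (BERT estimate)
```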

Gebru’s draft paper points out that the sheer resources required to build and sustain such large AI models means they tend to benefit wealthy organizations, while climate change hits marginalized communities hardest. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” they write.

Massive data, inscrutable models

Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there’s a risk that racist, sexist, and otherwise abusive language ends up in the training data.

An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

Moreover, because the training datasets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, […] undocumented training data perpetuates harm without recourse.”

Research opportunity costs

The researchers summarize the third challenge as the risk of “misdirected research effort.” Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them. “This research effort brings with it an opportunity cost,” Gebru and her colleagues write. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated datasets (and thus also use less energy).

Illusions of meaning

The final problem with large language models, the researchers say, is that because they’re so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

The dangers are obvious: AI models could be used to generate misinformation about an election or the covid-19 pandemic, for instance. They can also go wrong inadvertently when used for machine translation. The researchers bring up an example: In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.

Why it matters

Gebru and Bender’s paper has six co-authors, four of whom are Google researchers. Bender asked to avoid disclosing their names for fear of repercussions. (Bender, by contrast, is a tenured professor: “I think this is underscoring the value of academic freedom,” she says.)

The paper’s goal, Bender says, was to take stock of the landscape of current research in natural-language processing. “We are working at a scale where the people building the things can’t actually get their arms around the data,” she said. “And because the upsides are so obvious, it’s particularly important to step back and ask ourselves, what are the possible downsides? … How do we get the benefits of this while mitigating the risk?”

In his internal email, Dean, the Google AI head, said one reason the paper “didn’t meet our bar” was that it “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy-efficient and mitigate problems of bias. 

However, the six collaborators drew on a wide breadth of scholarship. The paper’s citation list, with 128 references, is notably long. “It’s the sort of work that no individual or even pair of authors can pull off,” Bender said. “It really required this collaboration.” 

The version of the paper we saw does also nod to several research efforts on reducing the size and computational costs of large language models, and on measuring the embedded bias of models. It argues, however, that these efforts have not been enough. “I’m very open to seeing what other references we ought to be including,” Bender said.

Nicolas Le Roux, a Google AI researcher in the Montreal office, later noted on Twitter that the reasoning in Dean’s email was unusual. “My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review,” he said.

Dean’s email also says that Gebru and her colleagues gave Google AI only a day for an internal review of the paper before they submitted it to a conference for publication. He wrote that “our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.”

Bender noted that even so, the conference would still put the paper through a substantial review process: “Scholarship is always a conversation and always a work in progress,” she said. 

Others, including William Fitzgerald, a former Google PR manager, have further cast doubt on Dean’s claim.

Google pioneered much of the foundational research that has since led to the recent explosion in large language models. Google AI invented the Transformer architecture in 2017, which serves as the basis for the company’s later model BERT, as well as OpenAI’s GPT-2 and GPT-3. BERT, as noted above, now also powers Google search, the company’s cash cow.

Bender worries that Google’s actions could create “a chilling effect” on future AI ethics research. Many of the top experts in AI ethics work at large tech companies because that is where the money is. “That has been beneficial in many ways,” she says. “But we end up with an ecosystem that maybe has incentives that are not the very best ones for the progress of science for the world.”


Daily Crunch: Slack and Salesforce execs explain their big acquisition


We learn more about Slack’s future, Revolut adds new payment features and DoorDash pushes its IPO range upward. This is your Daily Crunch for December 4, 2020.

The big story: Slack and Salesforce execs explain their big acquisition

After Salesforce announced this week that it’s acquiring Slack for $27.7 billion, Ron Miller spoke to Slack CEO Stewart Butterfield and Salesforce President and COO Bret Taylor to learn more about the deal.

Butterfield claimed that Slack will remain relatively independent within Salesforce, allowing the team to “do more of what we were already doing.” He also insisted that all the talk about competing with Microsoft Teams is “overblown.”

“The challenge for us was the narrative,” Butterfield said. “They’re just good [at] PR or something that I couldn’t figure out.”

Startups, funding and venture capital

Revolut lets businesses accept online payments — With this move, the company is competing directly with Stripe, Adyen, Braintree and Checkout.com.

Health tech venture firm OTV closes new $170M fund and expands into Asia — This year, the firm led rounds in telehealth platforms TytoCare and Lemonaid Health.

Zephr raises $8M to help news publishers grow subscription revenue — The startup’s customers already include publishers like McClatchy, News Corp Australia, Dennis Publishing and PEI Media.

Advice and analysis from Extra Crunch

DoorDash amps its IPO range ahead of blockbuster IPO — The food delivery unicorn now expects to debut at $90 to $95 per share, up from a previous range of $75 to $85.

Enter new markets and embrace a distributed workforce to grow during a pandemic — Is this the right time to expand overseas?

Three ways the pandemic is transforming tech spending — All companies are digital product companies now.

(Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

WH’s AI EO is BS — Devin Coldewey is not impressed by the White House’s new executive order on artificial intelligence.

China’s internet regulator takes aim at forced data collection — China is a step closer to cracking down on unscrupulous data collection by app developers.

Gift Guide: Games on every platform to get you through the long, COVID winter — It’s a great time to be a gamer.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
