Yeah, Apple’s M1 MacBook Pro is powerful, but it’s the battery life that will blow you away

Survival and strategy games are often played in stages. You have the early game where you’re learning the ropes, understanding systems. Then you have mid-game where you’re executing and gathering resources. The most fun part, for me, has always been the late mid-game where you’re in full control of your powers and skills and you’ve got resources to burn — where you execute on your master plan before the endgame gets hairy.

This is where Apple is in the game of power being played by the chip industry. And it’s about to be endgame for Intel. 

Apple has introduced three machines that use its new M1 system on a chip, based on over a decade's worth of work designing its own processing units around the ARM instruction set. These machines are capable, assured and powerful, but their greatest advancements come in the performance-per-watt category.

I tested the 13” M1 MacBook Pro extensively, and it's clear that this machine eclipses some of the most powerful Mac portables ever made in performance while simultaneously delivering, at minimum, 2x-3x the battery life.

These results are astounding, but they're the product of the long early game Apple has played with its A-series processors. Beginning in earnest in 2008 with the acquisition of P.A. Semi, Apple has been working to decouple the features and capabilities of its devices from the product roadmaps of external processor manufacturers.

The M1 MacBook Pro runs smoothly, launching apps so quickly that they're often open before your cursor leaves the dock.

Video editing and rendering are super performant, falling behind older machines only in workloads that lean heavily on the GPU, and even then only against powerful dedicated cards like the 5500M or Vega II.

Compiling projects like WebKit produces better build times than nearly any other machine (hell, the M1 Mac mini beats the Mac Pro by a few seconds). And it does so while using a fraction of the power.

This thing works like an iPad. That’s the best way I can describe it succinctly. One illustration I have been using to describe what this will feel like to a user of current MacBooks is that of chronic pain. If you’ve ever dealt with ongoing pain from a condition or injury, and then had it be alleviated by medication, therapy or surgery, you know how the sudden relief feels. You’ve been carrying the load so long you didn’t know how heavy it was. That’s what moving to this M1 MacBook feels like after using other Macs. 

Every click is more responsive. Every interaction is immediate. It feels like an iOS device in all the best ways. 

At the chip level, it also is an iOS device. Which brings us to…

iOS on M1

The iOS experience on the M1 machines is…present. That's the kindest thing I can say about it. Apps install from the App Store and run smoothly, without incident. Benchmarks run on iOS apps show that they perform natively, with no overhead. I even ran an iOS-based graphics benchmark, which performed just fine.

That, however, is where the compliments end. The current iOS app experience on an M1 machine running Big Sur is almost comical. There is no default tooltip explaining how to replicate common iOS interactions like swipe-from-edge; instead, a badly formatted cheat sheet is buried in a menu. The apps launch and run in windows only. Yes, that's right: no full-screen iOS apps at all. It's super cool for a second to have instant native support for iOS on the Mac, but at the end of the day this is a marketing win, not a consumer experience win.

Apple gets to say that the Mac now supports millions of iOS apps, but the fact is that the experience of using those apps on the M1 is sub-par. It will get better, I have no doubt. But right now the app experience on the M1 ranks pretty firmly in this order: native M1 app > Rosetta 2 app > Catalyst app > iOS app. Provided that the Catalyst ports can be bothered to build in Mac-centric behaviors and interactions, of course. iOS, though present, is not yet where it needs to be on the M1.

Rosetta 2

There is both a lot and not a lot to say about Rosetta 2. I'm sure we'll get more detailed breakdowns of how Apple achieved what it has with this new translation layer that makes x86 applications run fine on the M1 architecture. But the real nut of it is that Apple has managed to make a chip so powerful that it can take the roughly 26% hit (see the following charts) in raw power to translate apps and still run them as fast as, if not faster than, MacBooks with Intel processors.

It's pretty astounding. Apple would like us to forget the original Rosetta from the PowerPC transition as much as we would all like to forget it. And I'm happy to say that this is pretty easy to do, because I was unable to detect any real performance hit when comparing it to older, even 'more powerful on paper' Macs like the 16” MacBook Pro.

It's simply not a factor in most instances. And companies like Adobe and Microsoft are already hard at work bringing native M1 apps to the Mac, so the most-needed productivity and creativity apps will essentially get a free performance bump of around 30% when they go native. But even now they're just as fast. It's a win-win.

Methodology

My testing methodology was pretty straightforward. I ran a battery of tests designed to push these laptops in ways that reflect both real-world tasks and synthetic benchmarks. I ran the benchmarks with the machines plugged in and then again on battery power, to estimate sustained performance as well as performance per watt. All tests were run multiple times, with cooldown periods in between, in order to establish a solid baseline.
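For a sense of what that looks like in practice, here's a minimal sketch of such a harness. The command, run count and cooldown length here are illustrative assumptions, not my exact setup:

```python
import subprocess
import time

def benchmark(cmd, runs=3, cooldown_s=60):
    """Run `cmd` several times, with a cooldown between runs,
    and return the wall-clock time of each run in seconds."""
    timings = []
    for i in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        timings.append(time.perf_counter() - start)
        if i < runs - 1:
            time.sleep(cooldown_s)  # let the machine settle back to baseline
    return timings

# Example: time a trivial command three times with no cooldown.
print(benchmark(["python3", "-c", "pass"], runs=3, cooldown_s=0))
```

Averaging several runs separated by cooldowns smooths out one-off thermal or background-task noise, which matters a lot when comparing actively cooled and fanless machines.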

Here are the machines I used for testing:

  • 2020 13” M1 MacBook Pro 8-core 16GB
  • 2019 16” MacBook Pro 8-core 2.4GHz 32GB w/5500M
  • 2019 13” MacBook Pro 4-core 2.8GHz 16GB
  • 2019 Mac Pro 12-Core 3.3GHz 48GB w/AMD Radeon Pro Vega II 32GB

Many of these benchmarks also include numbers from Matt Burns' review of the M1 Mac mini and Brian Heater's review of the M1 MacBook Air.

Compiling WebKit

Right up top, I'm going to start with the real 'oh shit' chart of this piece. I checked WebKit out from GitHub and ran a build on all of the machines with no parameters. This is the one deviation from the spec list above: my 13” had issues I couldn't figure out, so I had some internet friends help me. Thanks also to Paul Haddad of Tapbots for guidance here.

As you can see, the M1 performs admirably across all models, with the MacBook Pro and Mac mini edging out the MacBook Air. This is a pretty straightforward way to visualize the difference in performance that emerges in heavy tasks lasting over 20 minutes, where the MacBook Air's lack of active fan cooling throttles the M1 back a bit. Even with that throttling, the MacBook Air still beats everything here except the very beefy Mac Pro.

But the big deal here is really this second chart. After a single build of WebKit, the M1 MacBook Pro had a massive 91% of its battery left. I ran this test multiple times, and I could easily have run a full build of WebKit 8-9 times on one charge of the M1 MacBook's battery. In comparison, I could have gotten through about three builds on the 16”, and the 13” only had one go in it.

This insane performance per watt is the M1's secret weapon, and the battery life is simply off the charts, even in processor-bound tasks. To give you an idea: throughout this build of WebKit, the P-cluster (the performance cores) hit peak frequency on pretty much every cycle, while the E-cluster (the efficiency cores) maintained a steady 2GHz. These cores are going at it, but they're extremely power efficient.

Battery Life

In addition to charting battery performance in some real-world tests, I also ran a couple of dedicated battery tests. In some cases they ran so long that I thought I had left the machine plugged in by mistake. It's that good.

I ran a mixed web-browsing and web-video-playback script that hit a series of pages, waited 30 seconds and then moved on, to simulate browsing. The results show a now-familiar pattern in our tests, with the M1 outperforming the other MacBooks by just over 25%.

In fullscreen 4K/60 video playback, the M1 fares even better, clocking an easy 20 hours at a fixed 50% brightness. In an earlier test I left auto-adjust on, and it crossed the 24-hour mark easily. Yeah, a full day. That's an iOS-like milestone.

The M1 MacBook Air does very well too, but its smaller battery means less playback time, at 16 hours. Both of them absolutely decimated the earlier models.

Xcode Unzip

This was another developer-centric test that was requested. Once again CPU-bound, and the M1 blew away every other system in my test group: faster than the 8-core 16” MacBook Pro, wildly faster than the 13” MacBook Pro and, yes, 2x as fast as the 2019 Mac Pro with its 3.3GHz Xeons.

Image Credits: TechCrunch

For a look at the power curve, and to show that there is no throttling of the MacBook Pro over this period (I never found any throttling over longer periods, by the way), here's the usage curve.

Unified Memory and Disk Speed

Much ado has been made of Apple including only 16GB of memory on these first M1 machines. In practice, however, I have been unable to push them hard enough to feel any effect of it, thanks to Apple's move to a unified memory architecture. Moving RAM onto the SoC means no upgradeability — you're stuck at 16GB forever. But it also means massively faster access to that memory by the chips on the system that need it most.

If I were a betting man, I'd say this is an intermediate step toward eliminating separate RAM altogether. It's possible that a future (far-future; this is the play for now) version of Apple's M-series chips could end up supplying memory to each of the various chips from a vast pool that also serves as permanent storage. For now, though, what you've got is a finite but blazing-fast pool of memory shared between the CPU cores, GPU and other SoC denizens like the Secure Enclave and Neural Engine.

If you want an indicator of how Apple thinks about the tight loop between macOS Big Sur and the M1 processor, each built for the other, note that the M1 MacBook Pro's About This Mac screen does not show clock speeds at all.

While running many applications simultaneously, the M1 performed extremely well. Because this new architecture keeps everything so close, with memory a short hop away on the package rather than out across the logic board, swapping between applications was zero issue. Even while beefy, data-heavy tasks ran in the background, the rest of the system kept flowing.

Even when the memory pressure tab of Activity Monitor showed that macOS was using swap space, as it did from time to time, I noticed no slowdown in performance.

Though I wasn't able to trip it up, my guess is that you would have to throw a single, extremely large file at this thing to get it to show any amount of struggle.

The SSD in the M1 MacBook Pro is running on a PCIe 3.0 bus, and its write and read speeds indicate that. 


Thunderbolt and Webcam

The M1 MacBook Pro has two Thunderbolt controllers, one for each port. This means that you get full PCIe 3.0 x4 speeds out of each, and it seems very likely that Apple could include up to four ports in the future without much change in architecture.
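For context on what 'full PCIe 3.0 x4 speeds' works out to, here's a quick back-of-envelope calculation of the theoretical link bandwidth, before any protocol overhead:

```python
# Back-of-envelope: theoretical bandwidth of a PCIe 3.0 x4 link.
GT_PER_S = 8e9          # PCIe 3.0 signaling rate per lane (transfers/s)
ENCODING = 128 / 130    # 128b/130b line-encoding efficiency
LANES = 4

bytes_per_s = GT_PER_S * ENCODING / 8 * LANES
print(f"{bytes_per_s / 1e9:.2f} GB/s")  # ~3.94 GB/s per controller
```

In other words, each port has nearly 4 GB/s of raw bandwidth behind it, which is why a dedicated controller per port matters for driving high-resolution displays and fast external storage at the same time.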

This configuration also means that you can easily power an Apple Pro Display XDR and another monitor besides. I was unable to test two Apple Pro Display XDR monitors side-by-side.

Oh, those webcam improvements. Apple says that the ISP in the M1 machines is improved over previous generations, but the camera itself is the same 720p webcam we've had in MacBooks forever. In my tests, the result is that the webcam is still bad; it's just less bad. Maybe it crosses the line into 'OK' because of the better white balance and slightly improved noise handling. It's still not great, though.

Cooling and throttling

No matter how long my tests ran, I was never able to detect any throttling of the CPU on the M1 MacBook Pro. From our testing it was evident that in longer operations (20-40 minutes on up) the MacBook Air would pull back a bit over time. Not so with the MacBook Pro.

Apple says that it has designed a new 'cooling system' for the M1 MacBook Pro, and that holds up. There is a single fan, but it is noticeably quieter than the fans in the other MacBooks. In fact, I was never able to get the M1 much hotter than 'warm,' and the fan ran at speeds closer to those of a water-cooled rig than the turbine situation in the other MacBooks.

Even a long, intense Cinebench R23 session could not make the M1 MacBook get loud. Over the course of the benchmark run, the high-performance cores regularly hit 3GHz, with the efficiency cores at 2GHz. Despite that, it continued to run very cool and very quiet in comparison to other MacBooks. It's the stealth bomber at the Harrier party.

In that Cinebench test, you can see that it doubles the multi-core performance of last year's 13” MacBook Pro and even beats the single-core performance of the 16” MacBook Pro.

I also ran a couple of Final Cut Pro tests from my test suite. First was a five-minute 4K60 timeline shot on an iPhone 12 Pro, with audio, transitions, titles and color grading. The M1 MacBook performed fantastically, slightly beating out the 16” MacBook Pro.


With an 8K timeline of the same duration, the 16” MacBook Pro with its Radeon 5500M really got to shine via FCP's GPU acceleration. The M1 held its own, though, showing 3x faster speeds than the 13” MacBook Pro with its integrated graphics.


And, most impressively, the M1 MacBook Pro used extremely little power to do it: just 17% of the battery to output an 81GB 8K render. The 13” MacBook Pro could not even finish this render on one battery charge.

As you can see in these GFXBench charts, the M1 MacBook Pro isn't a powerhouse gaming laptop, but we still got some very surprising and impressive results when a rack of Metal tests was run on its GPU. The 16″ MBP still has more raw power, but rendering games at Retina resolutions is still very possible here.

The M1 is the future of CPU design

All too often over the years we’ve seen Mac releases hamstrung by the capabilities of the chips and chipsets that were being offered by Intel. Even as recently as the 16” MacBook Pro, Apple was stuck a generation or more behind. The writing was basically on the wall once the iPhone became such a massive hit that Apple began producing more chips than the entire rest of the computing industry combined. 

Apple has now shipped over 2 billion chips, a scale that makes Intel's desktop business look like a luxury manufacturer. I think it was politic of Apple not to mention Intel by name during last week's announcement, but it's also clear that Intel's days on the Mac are numbered, and that the only saving grace for the rest of the industry is that Apple is incredibly unlikely to make chips for anyone else.

Years ago I wrote an article about the iPhone’s biggest flaw being that its performance per watt limited the new experiences that it was capable of delivering. People hated that piece but I was right. Apple has spent the last decade “fixing” its battery problem by continuing to carve out massive performance gains via its A-series chips all while maintaining essentially the same (or slightly better) battery life across the iPhone lineup. No miracle battery technology has appeared, so Apple went in the opposite direction, grinding away at the chip end of the stick.

What we’re seeing today is the result of Apple flipping the switch to bring all of that power efficiency to the Mac, a device with 5x the raw battery to work with. And those results are spectacular.

Vivenu, a ticketing API for events, closes a $15M Series A round led by Balderton Capital

vivenu, a ticketing platform that offers an API for venues and promoters to customize to their needs, has closed a $15 million (€12.6 million) Series A round led by Balderton Capital. Previous investor Redalpine also participated.

Historically speaking, most ticketing startups took a direct-to-consumer approach or provided turnkey solutions to big event promoters. But these days most events require a great deal more flexibility, not least because of the pandemic. So, by offering an API and giving promoters that flexibility, vivenu has managed to gain traction.

Venues and event owners get a full-featured, out-of-the-box ticketing platform with real-time control over all aspects of selling tickets, including configuring prices and seating plans, leveraging customer data and insights, and maintaining a branded look and feel across their sales channels. Its exposed APIs enable many different custom use cases for large international ticket sellers. Since its seed funding in March, the company says it has sold over 2 million tickets.

Simon Hennes, CEO and co-founder of vivenu said in a statement: “We created vivenu to address the need of ticket sellers for a user-centric ticketing platform. Event organizers were stuck with solutions that heavily depend on manual processes, causing high costs, dependencies, and frustration on various levels.”

Daniel Waterhouse, Partner at Balderton said: “Vivenu has built a sophisticated product and set of APIs that gives event organisers full control of their ticketing operations.”

vivenu is also the first European investment of Aurum Fund LLC, the fund associated with the San Francisco 49ers. Also investing in the round are Angels including Sascha Konietzke (Founder at Contentful), Chris Schagen (former CMO at Contentful), Sujay Tyle (Founder at Frontier Car Group) and Tiny VC.

In March 2020, vivenu secured €1.4 million in seed funding, bringing its total funding to €14 million. Previous investors include early-stage venture capital investor Redalpine, GE32 Capital and Hansel LLC (associated with the founders of Loft).

Speaking to TechCrunch, Hennes said: “You have to send your seat map to Ticketmaster, and then the account manager comes back to you with a seat map. This goes back and forth and takes ages. With us you have a seating chart designer basically integrated into the software which you can simply change yourself.”

Nordigen introduces free European open banking API

Latvian fintech startup Nordigen is switching to a freemium model thanks to a free open banking API. Open banking was supposed to democratize access to banking information, but the company believes banking aggregation APIs from Tink or Plaid are too expensive. Instead, Nordigen thinks it can provide a free API to access account information and paid services for analytics and insights services.

Open banking is a broad term and means different things, from account aggregation to verifying account ownership and payment initiation. The most basic layer of open banking is the ability to view data from third-party financial institutions. For instance, some banks let you connect to other bank accounts so that you can view all your bank accounts from a single interface.

There are two ways to connect to a bank. Some banks provide an application programming interface (API), which means that you can send requests to the bank’s servers and receive data in return.
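To make the request/response idea concrete, here's a toy sketch of parsing such a response. The JSON shape is a simplified illustration of a PSD2-style account-information payload, not any specific bank's or Nordigen's actual schema:

```python
import json

def parse_accounts(response_body):
    """Extract (iban, balance) pairs from an account-information
    response body, as returned by a bank's API after an authorized
    request. The field names here are illustrative only."""
    payload = json.loads(response_body)
    return [
        {"iban": acct["iban"], "balance": acct["balance"]["amount"]}
        for acct in payload["accounts"]
    ]

# A sample body of the kind an account-information endpoint might return:
sample = ('{"accounts": [{"iban": "DE89370400440532013000", '
          '"balance": {"amount": "123.45", "currency": "EUR"}}]}')
print(parse_accounts(sample))
```

A proper API like this returns structured data directly, which is exactly what makes it so much cheaper and more reliable than the screen-scraping alternative described next.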

While all financial institutions should offer an open API under the European PSD2 directive, many banks are still dragging their feet. That's why open banking API companies usually rely on screen scraping: they mimic web browser interactions, which is slow, requires a ton of server resources and can break.

“If you’re wondering how we’d be able to afford it, our free banking data API was designed purely with PSD2 in mind, meaning it’s lightweight in strong contrast to that of incumbents. So it wouldn’t significantly increase our costs to scale free users,” Nordigen co-founder and CEO Rolands Mesters told me.

So you don't get total coverage with Nordigen's API. The startup currently supports 300 European banks, which cover 60 to 90% of the population in each country. But it's hard to complain about a free product.

Some Nordigen customers will probably want more than raw information, which is where Nordigen's financial data analytics come in. They can be particularly useful if you're a lending company trying to calculate a credit score, a financial company with minimum income requirements, and so on.

For those additional services, you’ll have to pay. Nordigen currently has 50 clients and expects to attract more customers with its new freemium strategy.

Databand raises $14.5M led by Accel for its data pipeline observability tools

DevOps continues to get a lot of attention as a wave of companies develops more sophisticated tools to help developers manage increasingly complex architectures and workloads. In the latest development, Databand — an AI-based observability platform for data pipelines, built specifically to detect when something is going wrong with a data source while an engineer is working across a disparate set of data management tools — has closed a round of $14.5 million.

Josh Benamram, the CEO who co-founded the company with Victor Shafran and Evgeny Shulman, said that Databand's plans include more hiring; continuing to add customers for its existing product; expanding the library of tools it provides to users to cover an ever-increasing landscape of DevOps software, where it is a big supporter of open-source resources; and investing in the next steps of its own commercial product. That will include more remediation once problems are identified: in addition to detecting issues, engineers will be able to start automatically fixing them, too.

The Series A is being led by Accel, with participation from Blumberg Capital, Lerer Hippeau, Ubiquity Ventures, Differential Ventures and Bessemer Venture Partners. Blumberg led the company's seed round in 2018. Databand has now raised around $18.5 million and is not disclosing its valuation.

The problem Databand is solving is getting more urgent and problematic by the day, as evidenced by the exponential yearly rise in zettabytes of data generated globally. And as data workloads continue to grow in size and use, they become ever more complex.

On top of that, today there is a wide range of applications and platforms that a typical organization will use to manage source material, storage, usage and so on. That means when there are glitches in any one data source, it can be a challenge to identify where and what the issue is. Doing so manually can be time-consuming, if not impossible.

“Our users were in a constant battle with ETL (extract transform load) logic,” said Benamram, who spoke to me from New York (the company is based both there and in Tel Aviv, and also has developers and operations in Kiev). “Users didn’t know how to organize their tools and systems to produce reliable data products.”

It is really hard to focus attention on failures, he said, when engineers are balancing analytics dashboards, how machine models are performing, and other demands on their time; and that’s before considering when and if a data supplier might have changed an API at some point, which might also throw the data source completely off.

And if you've ever been on the receiving end of that data, you know how frustrating (and, perhaps more seriously, disastrous) bad data can be. Benamram said it's not uncommon for engineers to completely miss anomalies, which are only brought to their attention by CEOs looking at their dashboards and suddenly thinking something is off. Not a great scenario.

Databand's approach is to use big data to better handle big data: it crunches various pieces of information, including pipeline metadata like logs, runtime info and data profiles, along with information from Airflow, Spark, Snowflake and other sources, and puts the result into a single platform, giving engineers one view of what's happening so they can better see where bottlenecks or anomalies are appearing, and why.
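As a crude illustration of the kind of anomaly flagging such platforms automate — this z-score check on run durations is a toy example of my own, not Databand's actual method, which draws on far richer signals:

```python
from statistics import mean, stdev

def flag_anomalies(durations, threshold=2.5):
    """Return indices of pipeline runs whose duration deviates from
    the mean by more than `threshold` standard deviations."""
    mu, sigma = mean(durations), stdev(durations)
    if sigma == 0:
        return []  # all runs identical: nothing to flag
    return [i for i, d in enumerate(durations)
            if abs(d - mu) / sigma > threshold]

# Nine normal runs (~60s each) and one that suddenly took 10x as long:
runs = [58, 61, 60, 59, 62, 60, 61, 59, 60, 600]
print(flag_anomalies(runs))  # → [9]
```

Even this trivial check catches the scenario Benamram describes: a run that quietly blows up gets surfaced by the system instead of by a CEO staring at a broken dashboard.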

There are a number of other companies building data observability tools — Splunk is perhaps the most obvious, but there are also smaller players like Thundra and Rivery. These companies might step further into the area that Databand has identified and is fixing, but for now Databand's specific focus on identifying anomalies and helping engineers fix them has given it a strong profile and position.

Accel partner Seth Pierrepont said that Databand came to the VC’s attention in perhaps the best way it could: Accel needed a solution like it for its own internal work.

“Data pipeline observability is a challenge that our internal data team at Accel was struggling with. Even at our relatively small scale, we were having issues with the reliability of our data outputs on a weekly basis, and our team found Databand as a solution,” he said. “As companies in all industries seek to become more data driven, Databand delivers an essential product that ensures the reliable delivery of high quality data for businesses. Josh, Victor and Evgeny have a wealth of experience in this area, and we’ve been impressed with their thoughtful and open approach to helping data engineers better manage their data pipelines with Databand.”

The company's users include data teams from large Fortune 500 enterprises to smaller startups.
