What even is a software bug, in the minds of people who've never done development themselves? A glitch? An error? Unexpected behavior?
(Strictly speaking, it's the last one: a discrepancy between the specification and the implementation. But often, that happens because that part of the "specification" only ever existed in the head of someone who thought it should "obviously" behave that way.)
Often satirized by a well-known comic with a tree and a swing.
Apparently ingrained in Apple's engineering philosophy over the past decade and a half is the idea that many portions of their software and hardware stack should scale up and down a lot.
Their CPUs were scaled down from the iPhone to the Apple Watch, but also way up to high-end laptops (and presumably, next year, high-end desktops). Their kernel (XNU) scales across all of those as well, and even further down: it also runs on their Lightning-to-HDMI adapter.
It's a very interesting approach that most companies probably shouldn't ape; maintaining all of that is an expensive engineering task. The benefit, though? They're in charge of every design choice in the stack. The kernel, file system, development tools, etc. are exactly where they want them to be.
It does slightly mirror the "Windows Everywhere" ethos of Ballmer's 2000s-era Microsoft, but goes further. Microsoft didn't design their own chips, and Microsoft's embedded OS family, CE1, differed significantly from their desktop OS family, NT, until the former was dropped ca. Windows Phone 8 in 2012.
For example, I remember when Office 2000 added a feature that hid infrequently used items from menus, in hopes of making them less cluttered.
Years later, the Ribbon was arguably another attempt to improve discoverability.
The more cynical edge of this topic, of course, is the question of how you keep revenue from new versions coming in without adding new features.
AnandTech's take on the M1 Pro and Max is out. Perhaps particularly noteworthy is a table on page 3, showing various performance metrics alongside their actual wall power draw. The M1 Max loses some benchmarks and wins others, but it wins every time, by factors between 1.74 and 6.5, once you divide the score by the amount of power it took to achieve it.
It'll be interesting to see what an M2 looks like. Will it be based on the A15's Avalanche/Blizzard cores? If so, it's probably only worth it if they further bump the clock speed. The A15 is about 9% faster at single-threaded benchmarks, but it achieves that largely by increasing the clock from 3 GHz to 3.2 GHz. Remove that from the equation2, and only a 2% gain remains. So, perhaps the M2 will clock at 3.5 GHz to make that worthwhile — or they'll skip these cores altogether.
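The clock-normalization arithmetic above can be sketched as a quick back-of-the-envelope calculation (the ~9% single-threaded gain and the 3 GHz → 3.2 GHz clock figures are the ones from the paragraph; everything else is just division):

```python
# A15 vs. predecessor, single-threaded: ~9% faster overall.
total_speedup = 1.09

# The clock bump alone accounts for 3.2 / 3.0 ≈ 6.7% of that.
clock_ratio = 3.2 / 3.0

# What's left after factoring out the higher clock is the
# per-clock (IPC) improvement.
ipc_gain = total_speedup / clock_ratio
print(f"IPC gain: {(ipc_gain - 1) * 100:.1f}%")  # ~2%
```

Which is why a hypothetical M2 on these cores would need another clock bump (say, to 3.5 GHz) to be a meaningful generational step.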
Rebranded in a frustratingly frequent manner. CE names included Windows Powered, Windows Embedded CE, and Windows Embedded Compact, and then there's a plethora of operating systems that shipped on top of it, including Windows Mobile. Then there's Windows Embedded Standard, which is instead NT-based, and now known as Windows IoT, because buzzwords.↩
The M1 already runs at 3.2 GHz.↩