  • Silly Comparisons and Other Nonsense

    September 8th, 2015

    Once upon a time, when Apple was installing PowerPC chips in Macs, there came a point when Intel appeared to be soaring ahead in the CPU horsepower race. Do you remember when Pentiums began to approach 4 GHz, while the PowerPC G5 topped out at 2.5 GHz, and even that required liquid cooling? So, in theory, the Pentium must have been a whole lot faster.

    In the real world, a processor’s speed potential is only part of the picture; don’t forget the speed of the memory bus and the hard drive. An SSD in a slower Mac nowadays can do wonders to make it seem a whole lot snappier. Ask any user of the new MacBook.

    In any case, when Apple was still building Macs with PowerPC chips, and the advertised clock speeds of Pentiums soared, there were regular performance bakeoffs, usually during Apple’s public events. The result? Macs were still faster at many tasks. Some claimed the benchmark methodology was deliberately rigged to make the Mac look faster than it really was. Perhaps Apple was selective in picking benchmarks that showed the PowerPC to its best advantage, but they still represented actual things you’d do with high-end software on your Mac.

    Indeed, Apple shared the methodology, and the actual test scripts, with journalists, and when I used them, I got results quite similar to theirs.

    But development of the PowerPC stalled. Apple’s chip partners, IBM and Motorola, were never able to tame the G5 to run efficiently in a notebook, so PowerBooks used the much slower G4. Steve Jobs realized something had to give, and that meant giving up the PowerPC and moving to Intel. The transition was first announced in 2005 and completed in 2006.

    Today, Mac performance is hardly an issue. In most respects, except for dedicated gaming machines or PCs with the most powerful graphics hardware, Macs and PCs come pretty close on most benchmarks, and the Mac is often more power efficient. The real comparisons are about hardware design, operating system capabilities, and app availability.

    Moving on…

    With new iPhones on the agenda this week, there are the inevitable comparisons with existing gear. How does the latest flagship smartphone from Samsung and, to a lesser extent, HTC and LG, stack up against last year’s iPhone? These are comparisons that rely not on real-world results but on spec sheets.

    So it’s inevitable that the presumed or measured specs of an iPhone will seem to pale in comparison to what the fastest Android gear offers. Why have two cores when you can have four or more? Aren’t the clock speeds faster on Android gear, not to mention the amount of built-in RAM? What about the added frills, such as displays that extend to the sides? Isn’t that an advantage? Well, I suppose if you want to stare at the edge of the phone and imagine that somehow improves your user experience.

    As to specs: remember we are talking about different operating systems, and processors that, in Apple’s case, are customized and optimized for specific hardware and software. Real benchmarks tend to show Apple at or near the top of the food chain. But there’s more to judging performance. What about the user interface and the basics, such as app launch times and navigating through the interface? When or where do things lag, and is everything pretty snappy? Here, a fraction of a second either way hardly matters.
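    To make that concrete, here’s a minimal Swift sketch of the sort of informal, real-world-style timing anyone can run, as opposed to reading a synthetic benchmark score off a chart. Nothing here comes from Apple or from any benchmarking app; the measure helper and the JSON round-trip workload are hypothetical stand-ins for whatever everyday task you care about.

      import Foundation

      // A rough sketch: time a repeated everyday task in wall-clock seconds,
      // rather than leaning on a synthetic benchmark score. The JSON round
      // trip below is just a stand-in for a task you actually perform.
      func measure(label: String, iterations: Int, task: () -> Void) {
          let start = Date()
          for _ in 0..<iterations {
              task()
          }
          let elapsed = Date().timeIntervalSince(start)
          print("\(label): \(elapsed) seconds for \(iterations) iterations")
      }

      let sample: [String: Any] = ["title": "Silly Comparisons", "year": 2015]

      measure(label: "JSON round trip", iterations: 1_000) {
          // Encode and decode a small dictionary; the result is discarded.
          guard let data = try? JSONSerialization.data(withJSONObject: sample) else { return }
          _ = try? JSONSerialization.jsonObject(with: data)
      }

    The point is not the number itself, but that the stopwatch runs over something you actually do, which is the kind of comparison a spec sheet never captures.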

    Apple routinely gets dinged for failing to match the competition feature-for-feature, or spec-for-spec. But claiming brownie points because there’s something an iPhone doesn’t have misses the point. Is the feature needed? And if it is, does it even work? Do you remember the Tilt to Scroll capability once highly touted on Samsung Galaxy smartphones? How well did that feature really work? Did the content scroll quickly and without pause? Did it stall, or sometimes fail to work at all?

    Well, I seldom got it to work except during the setup process. In the real world, it was an abject failure, but adding bullet points appears to be more important than actually perfecting a feature.

    Apple’s approach has occasionally been explained by VP Philip Schiller: it’s not just about figuring out what features to add, but what features to remove. Sometimes Apple makes a wrong move, and sometimes a new feature doesn’t work as well as it should. The first iterations of the Touch ID fingerprint sensor weren’t always reliable, but it did get better over time, and the hardware has been improved as well. In contrast, Samsung’s early fingerprint sensors in recent Galaxy handsets were very hit or miss (mostly miss), but having a fingerprint sensor was, to them, a bragging point, and whether it actually worked didn’t seem to matter.

    When the new iPhones go on sale, there will be the inevitable benchmarks. Even though Apple seldom offers very much information about the specs of an A-series processor, the benchmarking apps will provide that information. Again, there will be inevitable feature and spec comparisons. But how it all comes together and works in the real world is the issue, even if such fine details are more difficult to quantify when you create lists in Keynote or PowerPoint.




    2 Responses to “Silly Comparisons and Other Nonsense”

    1. Phil Robins says:

      Great column! A question…

      Is there any real world use benchmark? Not sure what it would include–perhaps taking a photo, checking email, browsing to this site, reading a couple of stories, playing some podcasts or music, etc.

      That would be a whole lot more informative to the normal user than bizarre numbers in those Geek whatevers…

      And would more closely mirror real world experience!
