

  • Listen to The Tech Night Owl LIVE

    Last Episode — August 24: Gene presents regular guest, tech podcaster and commentator Kirk McElhearn, who comes aboard to talk about the impact of the outbreak of data hacks and ways to protect your stuff with strong passwords. He’ll also provide a common sense, if unexpected, tip for setting one up. Also on the agenda: rumors about the next Mac mini from Apple. Will it, as rumored, be a visual clone of the Apple TV, and what are the limitations of such a form factor? As a sci-fi and fantasy fan, Kirk will also talk about some of his favorite stories and more. In his regular life, Kirk is a lapsed New Yorker living in Shakespeare’s home town, Stratford-upon-Avon, in the United Kingdom. He writes about things, records podcasts, makes photos, practices zen, and cohabits with cats. He’s an amateur photographer, and shoots with Leica cameras and iPhones. His writings include regular contributions to The Mac Security Blog, The Literature & Latte Blog, and TidBITS, and he has written for Popular Photography, MusicWeb International, and several other websites and magazines. Kirk has also written more than two dozen books and documentation for dozens of popular Mac apps, as well as press releases, web content, reports, white papers, and more.

    For more episodes, click here to visit the show’s home page.

    Should You Believe Even Positive News About Apple?

    June 3rd, 2018

    It wasn’t so many months ago that there were loads of reports that Apple’s great experiment, the iPhone X, was a huge failure. Inventories were growing, and there were major cutbacks in production. All of this was allegedly based on reports from the supply chain.

    Such blatant examples of fake news aren’t new. It happens almost every winter. After a December quarter of peak sales, Apple routinely cuts back on production for the March quarter. It’s not the only company to follow such seasonal trends, but somehow Apple gets the lion’s share of the attention.

    From time to time, Tim Cook schools the media about relying on a few supply chain metrics, reminding them that, even if true, they don’t necessarily provide a full picture of supply and demand.

    He might as well be talking to himself since he’s almost always ignored.

    In any case, the numbers from the December and March quarters painted a decidedly different picture than those rumors depicted. The iPhone X was the number one best selling smartphone on Planet Earth for every week it was on sale. I don’t know if the trend has continued, but Apple has nothing to apologize for.

    Now one of the memes presented in the days preceding the arrival of the iPhone X — before the talk about its non-existent failure arose — was that it would fuel a super upgrade cycle. Up till then, the usual two-year replacement scenario was beginning to fade. In part this was due to the end of the subsidized cell phone contract in the U.S., fueled by T-Mobile’s supposedly innovative “Uncarrier” plans. They appeared to represent something different, but at the end of the day they weren’t so different in what you had to pay, at least over the term of your smartphone purchase.

    Originally, you’d acquire a cell phone either by buying the unit outright, or signing up for a two-year contract in which you’d pay something — or nothing — upfront and then be obligated to keep the service in force for at least two years. If you cancelled early, you’d pay a penalty to cover what the carrier presumably lost because you didn’t pay off the device.

    After two years, you’d be able to cancel your contract without penalty, but if you kept it in force, the price wouldn’t change even though the device had been paid off. It was a boon to the carrier if you didn’t upgrade. But if you did, the two-year requirement would start all over again.

    With an “Uncarrier” deal, the cell phone purchase was separated from your wireless service. You could buy the phone outright, bring a compatible one you already owned, or acquire a new handset for an upfront payment plus a set amount every month until it was paid off. It was essentially a no-interest loan, but you’d also have the option to exchange the phone for a new device after a certain amount of time, usually 12 to 18 months. That way, the purchase became a lease, and you’d never own anything. In exchange for getting new hardware on a regular basis, you’d never stop paying.

    What it also meant is that, once your device was paid off, your monthly bill would go down, giving you an incentive to keep your hardware longer if it continued to perform to your expectations.
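    To make the installment arithmetic above concrete, here is a minimal sketch; all figures are hypothetical examples for illustration, not actual carrier pricing.

    ```python
    # Hypothetical illustration of a no-interest installment plan: the figures
    # below are made-up examples, not real carrier or Apple pricing.

    def installment_total(upfront, monthly, months):
        """Total paid for a device: down payment plus monthly installments."""
        return upfront + monthly * months

    # e.g. $99 down plus $30/month over a 24-month term
    print(installment_total(99, 30, 24))  # 819
    ```

    Once the term ends and the device is paid off, that monthly installment drops out of the bill — unless you trade in for a new handset and restart the clock.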

    Now that iPhone super upgrade cycle didn’t occur as predicted. Yes, iPhone sales did increase a tiny bit in the last quarter, but revenue soared because the iPhone X dominated new purchases, thus boosting the average transaction price. That, too, was contrary to all those predictions that Apple’s most expensive smartphone was way overpriced and that customers were reacting negatively.

    How dare Apple charge $999 and up for a new handset?

    Rarely mentioned was the fact that Samsung, Google with its Pixel phones, and other handset makers also sold higher-priced gear, but there were few complaints. Not that their sales were any great shakes, but some regarded such handsets as certain iPhone killers, except that Apple overwhelmed these products in sales.

    So what’s the latest alleged super (duper?) upgrade cycle about?

    Well, according to a published report in AppleInsider, Daniel Ives of GBH Insight claims that “the Street is now starting to fully appreciate the massive iPhone upgrade opportunity on the horizon for the next 12 to 18 months with three new smart phones slated for release.”

    Deja vu all over again?

    For now, Apple has become a Wall Street darling. But don’t bet on that continuing. The next time someone finds reasons, real or imagined, to attack Apple’s prospects for success, the stock price will drop again. Of course, there are other reasons for stock prices to vary, including the state of the economy, possible trade wars, and investor psychology.

    So what is Ives expecting?

    He is projecting that Apple might sell up to 350 million iPhones over a period of 18 months after this fall’s new product introductions. Supposedly they will be so compelling that people who might have otherwise sat on the sidelines and kept their existing gear will rush to upgrade.

    As regular readers might recall, predictions have focused on a new iPhone X and a larger iPhone X Plus, plus a regular iPhone with an edge-to-edge LCD display. Will there be an iPhone 8 refresh, or will Apple just sell last year’s models at a lower price? What about a smaller model, the alleged iPhone SE 2?

    I don’t disbelieve the rumors about the 2018 iPhone lineup, but predictions of super upgrade cycles may not be so credible. People appear to be keeping their smartphones longer, so long as they continue to deliver satisfactory performance. And, no, I won’t even begin to consider the performance throttling non-scandal.


    Newsletter Issue #965: SSSSHHHH: Alexa is Listening

    May 28th, 2018

    Let me start with the Siri follies.

    With growing concern that Apple’s Siri digital assistant isn’t capable of matching the competition from Amazon and Google, there are rumors that the next WWDC will feature news of a major refresh. Last year, Apple touted that Siri would receive a new voice and machine learning, but it’s not at all certain there has been much change beyond a smoother conversational tone.

    A recent published report featured expressions of sour grapes from former members of Apple’s Siri team, plus a claim that Siri worked fine when reporters tested it before it went public. But after it was launched, beginning with the iPhone 4s in 2011, Siri’s bugs were legion. Maybe it just couldn’t cope with massed requests under load.

    Continue Reading…


    Consumer Reports’ Product Testing Shortcomings: Part Two

    May 24th, 2018

    In yesterday’s column, I expressed my deep concerns about elements of Consumer Reports’ testing process. It was based on an article from AppleInsider. I eagerly awaited part two, hoping that there would be at least some commentary about the clear shortcomings in the way the magazine evaluates tech gear.

    I also mentioned two apparent editorial glitches I noticed, in which product descriptions and recommendations contained incorrect information. These mistakes were obvious with just casual reading, not careful review. Clearly CR needs to beef up its editorial review process. A publication with its pretensions needs to demonstrate a higher level of accuracy.

    Unfortunately, AppleInsider clearly didn’t catch the poor methodology used to evaluate speaker systems. As you recall, they use a small room, and crowd the tested units together without consideration of placement, or the impact of vibrations and reflections. The speakers should be separated, perhaps by a few feet, and the tests should be blind, so that the listeners aren’t prejudiced by the look or expectations for a particular model.

    CR’s editors claim not to be influenced by appearance, but they are not immune to the effects of human psychology, and the factors that might cause them to give one product a better review than another. Consider, for example, the second part of a blind test, which is level matching. All things being equal, a system a tiny bit louder (a fraction of a dB) might seem to sound better.

    I don’t need to explain why.
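    For the curious, the level difference between two speakers follows directly from the standard decibel formula; this minimal Python sketch (with hypothetical amplitudes) shows how a mere 3% difference in signal level works out to roughly a quarter of a dB — small, but enough to tilt an unmatched comparison.

    ```python
    import math

    def level_difference_db(rms_a, rms_b):
        """Level difference in decibels between two signals, from their RMS amplitudes."""
        return 20 * math.log10(rms_a / rms_b)

    # A speaker playing at an RMS amplitude only ~3% higher than another
    # measures about a quarter of a dB louder.
    print(round(level_difference_db(1.03, 1.00), 2))  # 0.26
    ```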

    Also, I was shocked that CR’s speaker test panel usually consists of just two people with some sort of unspecified training so they “know” what loudspeakers should sound like. A third person is only brought in if there’s a tie. Indeed, calling this a test panel, rather than a couple of testers or a test duo or trio, is downright misleading.

    Besides, such a small sampling doesn’t account for the subjective nature of evaluating loudspeakers. People hear things differently, and they have different expectations and preferences. All things being equal, even with blind tests and level matching, a sampling of two or three is still not large enough to reach a consensus. A listening panel with enough participants to reveal a trend might be, but the lack of scientific controls from a magazine that touts accuracy and reliability is very troubling.

    I realize AppleInsider’s reporters, though clearly concerned about the notebook tests, were probably untutored about the way the loudspeakers were evaluated, and the serious flaws that make the results essentially useless.

    Sure, it’s very possible that the smart speakers from Google and Sonos are, in the end, superior to the HomePod. Maybe a proper test with a large enough listener panel and proper setup would reveal such a result. So far as I’m concerned, however, CR’s test process is essentially useless for any system other than those with extreme audio defects, such as excessive bass or treble.

    I also wonder just how large and well equipped the other testing departments are. Remember that magazine editorial departments are usually quite small. The consumer publications I wrote for had a handful of people on staff, and mostly relied on freelancers. Having a full-time staff is expensive. Remember that CR carries no ads. Income is mostly from magazine sales, plus the sale of extra publications and services, such as a car pricing service, and reader donations. In addition, CR requires a multimillion dollar budget to buy thousands of products at retail every year.

    Sure, cars will be sold off after use, but even then there is a huge loss due to depreciation. Do they sell their used tech gear and appliances via eBay? Or donate to Goodwill?

    Past the pathetic loudspeaker test process, we have their lame notebook battery tests. The excuse for why they turn off browser caching doesn’t wash. To provide an accurate picture of what sort of battery life consumers should expect under normal use, they should perform tests that don’t require activating obscure menus and/or features that only web developers might use.

    After all, people who buy personal computers will very likely wonder why they aren’t getting the battery life CR achieved. They can’t! At the end of the day, Apple’s tests of MacBook and MacBook Pro battery life, as explained in the fine print at its site, are more representative of what you might achieve. No, not for everyone, but certainly if you follow the steps listed, which do represent reasonable, if not complete, use cases.

    It’s unfortunate that CR has no competition. It’s the only consumer testing magazine in the U.S. that carries no ads, is run by a non-profit corporation, and buys all of the products it tests anonymously via regular retail channels. Its setup conveys the veneer of being incorruptible, and thus more accurate than the tests from other publications.

    It does seem, from the AppleInsider story, that the magazine is sincere about its work, though perhaps somewhat full of itself. If it is truly honest about perfecting its testing processes, however, perhaps it should reach out to professionals in the industries that it covers and refine its methodology. How CR evaluates notebooks and speaker systems raises plenty of cause for concern.


    Some Troubling Information About Consumer Reports’ Product Testing

    May 23rd, 2018

    AppleInsider got the motherlode. After several years of back-and-forth debates about its testing procedures, Consumer Reports magazine invited the online publication to tour its facilities in New York. On the surface, you’d think the editorial staff would be putting on their best face to get favorable coverage.

    And maybe they will. AppleInsider has only published the first part of the story, and there are apt to be far more revelations about CR’s test facilities and the potential shortcomings in the next part.

    Now we all know about the concerns: CR finds problems, or potential problems, with Apple gear. Sometimes the story never changes, sometimes it does. But the entire test process may be a matter of concern.

    Let’s take the recent review that pits Apple’s HomePod against the high-end Google Home Max, which sells for $400, and the Sonos One. In this comparison, “Overall the sound of the HomePod was a bit muddy compared with what the Sonos One and Google Home Max delivered.”

    All right, CR is entitled to its preferences and its test procedures, but let’s take a brief look at what AppleInsider reveals about them.

    So we all know CR claims to have a test panel that listens to speakers set up in a special room that, from the front at least, comes across as a crowded audio dealer with loads of gear stacked up one against another. Is that the ideal setup for a speaker system that’s designed to adapt itself to a listening room?

    Well, it appears that the vaunted CR tests are little better than what an ordinary subjective high-end audio magazine does, despite the pretensions. The listening room, for example, is small with a couch, and no indication of any special setup in terms of carpeting or wall treatment. Or is it meant to represent a typical listening room? Unfortunately, the article isn’t specific enough about such matters.

    What is clear is that the speakers, the ones being tested and those used for reference, are placed in the open adjacent to one another. There’s no attempt to isolate the speakers to prevent unwanted reflections or vibrations.

    Worse, no attempt is made to perform a blind test, so that a speaker’s brand name, appearance, or other factors don’t influence a listener’s subjective opinion. For example, a large speaker may seem to sound better than a small one, but not necessarily because of its sonic character. The possibility of prejudice, even unconscious, against one speaker or another is not considered.

    But what about the listening panel? Are there dozens of people taking turns to give the speakers thorough tests? Not quite. The setup involves a chief speaker tester, one Elias Arias, and one other tester. In other words, the panel consists of just two people, a testing duo, supposedly specially trained as skilled listeners in an unspecified manner, with a third brought in in the event of a tie. But no amount of training can compensate for the lack of blind testing.

    Wouldn’t it be illuminating if the winning speaker still won when you couldn’t identify it? More likely, the results might be very different. But CR often appears to live in a bubble.

    Speakers are measured in a soundproof room (an anechoic chamber). The results reveal a speaker’s raw potential, but they don’t provide data as to how it behaves in a normal listening room, where reflections will impact the sound that you hear. Experienced audio testers may also perform the same measurements at the actual listening location, so you can see how a real-world set of numbers compares to what the listener actually hears.

    Comparing those numbers with the ones from the anechoic chamber might also provide an indication of how the listening area impacts those measurements.

    Now none of this means that the HomePod would have seemed less “muddy” if the tests were done blind, or if the systems were isolated from one another to avoid sympathetic vibrations and other side effects. It might have sounded worse, the same, or the results might have been reversed. I also wonder if CR ever bothered to consult with actual loudspeaker designers, such as my old friend Bob Carver, to determine the most accurate testing methods.

    It sure seems that CR comes up with peculiar ways to evaluate products. Consider tests of notebook computers, where they run web sites from a server in the default browser with cache off to test battery life. How does that approach possibly represent how people will use these notebooks in the real world?

    At least CR claims to stay in touch with manufacturers during the test process, so they can be consulted in the event of a problem. That approach succeeded when a preliminary review of the 2016 MacBook Pro revealed inconsistent battery results. It was strictly the result of that outrageous test process.

    So turning off caching in Safari’s usually hidden Develop menu revealed a subtle bug, which Apple fixed with a software update. Suddenly a bad review became a very positive review.

    Now I am not going to turn this article into a blanket condemnation of Consumer Reports. I hope there will be more details about testing schemes in the next part, so the flaws —  and the potential benefits — will be revealed.

    In passing, I do hope CR’s lapses are mostly in the tech arena. But I also know that their review of my low-end VW claimed the front bucket seats had poor side bolstering. That turned out to be totally untrue.

    CR’s review of the VIZIO M55-E0 “home theater display” mislabeled the names of the setup menu’s features in its recommendations for optimal picture settings. It also claimed that no printed manual was supplied with the set; this is half true. You do receive two Quick Start Guides in multiple languages. In its favor, most of the picture settings actually deliver decent results.