Bill Nye, Neil DeGrasse Tyson, Richard Dawkins and others discuss the Storytelling of Science, hosted by Arizona State University’s Origins Project.
A few weeks ago there was a massive security breach in the Yahoo! email service behind the Telecom @xtra.co.nz addresses. According to information supplied by Yahoo! in a press release, up to 20% of 400,000 active email accounts had been compromised.
Telecom employees worked hard day and night to manage the situation. Most of the action needed from consumers of this service involved password resets. This caused a lot of trouble for people who weren’t able to access their accounts from email clients or third-party services.
After the event a review was launched by Telecom New Zealand: “We share the frustration that our customers have been experiencing over recent months. We fully appreciate that repeatedly saying ‘sorry’ doesn’t cut it anymore. We are committed to taking a close, hard look at the best way to meet our customers’ email needs.”
Almost a month later, Telecom announced it had decided to stick with Yahoo! as the email provider for its consumer ISP service: “Telecom New Zealand announced today that it will continue to offer its Yahoo! Xtra email service with Yahoo as its email provider, after receiving strong feedback from customers around the high value they place on it and obtaining a commitment from Yahoo! that it would work with Telecom to improve the customer experience of the service.”
It took Yahoo! a week to acknowledge something was wrong:
Yahoo! is continuing to work with Telecom to ensure Yahoo! Xtra mail accounts that were compromised last weekend have been secured and its in-depth investigation into the circumstances surrounding this issue is on-going.
“There is a lot of misinformation around what may have caused this vulnerability in the Yahoo! email product and the type of information that may have been compromised. There is currently no evidence to support reports that access has been gained to any user information beyond the customer's email address book or that this issue is related to any issues overseas, although we continue to investigate this,” says Laura Maxwell-Hansen, GM Yahoo! New Zealand.
A “lot of misinformation”, said Yahoo!. So I asked their PR person if they could clarify exactly what happened, so that we could post the correct information. The reply was: “It’s not appropriate to disclose that information as these details could be misused and may assist a hacker in the future.”
Either they were not sure what caused the problem in the first place, or there was no fix being released soon. Otherwise, how could disclosing it “assist a hacker in the future”? Obviously we don’t know for sure, because of all this security by obscurity.
Guess what? Almost three weeks after the events, and just a week after Telecom’s decision to stick with Yahoo! as its email provider it seems the @xtra.co.nz email service has been compromised again. This is from their network status page:
UPDATE: Here is what the Inbox folder of a compromised mailbox looks like when the account sends spam out and starts receiving bounces from servers reporting invalid addresses… Just look at the frequency of spam being sent:
After seeing a couple of my tweets about analytics and performance, the folks at Pingdom asked me a few questions to put together a blog post about Geekzone performance: how we maintain the site, how we collect data (including real user monitoring and analytics) and what makes the site run.
You can see some interesting information about browser usage and speeds in our State of Browsers on Geekzone March 2013.
We have been using the Pingdom RUM service pretty much from the start of the beta, which was released the first week of January and should be out of beta soon.
I just had the strangest experience after installing Office 2013 on my laptop… Someone pointed out to me that the Anniversary date showing in my Windows Phone calendar for a contact was wrong by a couple of days, so I went into Outlook on the laptop to update the information.
It took me almost 24 hours of talking with a Microsoft friend to work out how to actually access the Anniversary field in an existing contact. I will show you why…
Let’s create a new contact, which uses a “long form” dialog like previous Outlook versions:
After saving, I find my contact in Outlook and this is what I see:
Double-click the contact in the list and we now get a different, “metro short form” dialog:
Nowhere to edit the Anniversary. Surely if I entered that information in my contact card then I should be able to edit it?
I asked my Microsoft friend and he kept telling me on Twitter to edit the contact and click “Details”. But there’s no details here. Someone else then suggested I change the Contacts view from “People” to “Card” before opening the contact.
Changed from People to Card view and that’s how the list looks now:
When I double-click the contact entry to edit it, I then get the old style “long form” edit dialog, the same one used to create the contact in the first place:
And indeed if I click Details I get to the information I want to change, which is what I’ve done for the last fifteen years:
So let’s check this:
- Regardless if you are in People or Card view, you always get the “long form” when creating a new contact
- If you are in People view you get the all hipster “metro short form” when trying to edit a contact
- If you want to actually, you know, edit the contact with all the information in the “long form” then you have to switch to Card view before opening the contact, because there’s no switch between the “short” and “long” forms once you start editing it.
Why didn’t I think of this? Perhaps because it’s not intuitive enough? Perhaps because it’s confusing? Even the Microsoft guy forgot to tell me that I had to switch between views. He kept telling me on Twitter “People, Contact, Details” and never mentioned that I had to switch views.
A friend on Twitter came to Microsoft’s defence, saying that any upgrade project would require training, etc. But I don’t buy that. Do people seriously think John Doe, walking into Dick Smith to buy a copy of Office 2013 to install on their new PC (or buying a new PC with Office 2013 pre-installed), won’t get confused? Do people seriously think John Doe will buy a new PC plus training to use it at home? I got confused, and I use this every day; I can imagine what non-tech users will think.
Here is an interesting insight on why the much-publicised closing down of Google Reader is probably going to affect the web as we know it. According to the blog Google Operating System post "Google Reader Data Points", the CNN RSS feed has 24 million subscribers on Google Reader. The second most subscribed feed is Engadget with 6.6 million subscribers. JoelOnSoftware has 148,000 subscribers.
As for our little Geekzone, here is our most recent stat from Feedburner (another Google service) showing we have 176,049 subscribers via Google Reader, out of 177,299.
That's 99% of our RSS subscriber base disappearing on 1st July. I imagine other blogs and news sites around the world will see a similar number.
In a single stroke Google is wiping a whole lot of information consumers from these publishers' stats.
Some might say "oh, but people can use Google+ Circles, Twitter feeds, Facebook pages, LinkedIn Groups and so on to distribute this same information around".
Sure, this could be done, but publishers would struggle to get the same number of subscribers on those platforms. And the amount of work (and money) required to reach readers on a diverse set of platforms, each with its own problems, would be huge.
I don't think Google is doing a great service to the web when they announce the Google Reader death.
It's almost one year since I posted the last State of Browsers on Geekzone March 2012, so it's time for the annual update. These charts are based on more than 600,000 visits (and a couple of million page views) to Geekzone during the 30 day period ending 12th March 2013.
Since last year Chrome usage on Geekzone went up from 30% to 40%. Both Firefox and Internet Explorer had a very small dip, and Safari a very small gain. Firefox went down from 28% to 24% and Internet Explorer from 26% to 23%.
Now for New Zealand-specific numbers. Chrome went up from 30% to 39%, Internet Explorer is the second most used browser with 24% (down from 29%) and Firefox is the third most used browser with 22% (down from 27%).
Similar to the last update, here are two charts showing New Zealand only browser usage split in business hours and after hours:
And here is the split of Internet Explorer versions. Internet Explorer 9 is going strong, up from 43% to 50%, and Internet Explorer 10 is gaining traction, going from 0% to 10% in the last twelve months. Internet Explorer 6 has practically disappeared, with only a couple of hundred visits to the site out of almost 70,000 Internet Explorer visits in total. Internet Explorer 8 had a big dip, from 43% to 30%, but Internet Explorer 7 remains around the same number:
In the last few months I have been running some RUM services on Geekzone, including Torbit and a beta version of Pingdom RUM. So here, for the first time, are the fastest and slowest of the top three browsers on Geekzone. About 30% of Geekzone visitors use ad blockers, and I'd expect Firefox and Chrome users to use ad blockers more than Internet Explorer users, so these numbers could be skewed a bit toward those two browsers:
I hope you found this post informative. If you want some more specific measurements please post in the comments.
UPDATE: As requested, here is an OS distribution:
Over the last couple of months we started seeing a trend in our Geekzone forums: more and more people who bought Samsung Galaxy SIII smartphones were being affected by the Samsung Sudden Death problem. But the worrying part of this trend was really the number of times people reported their handsets coming back from the repair service with a "no warranty repair" tag, claiming the user must have tampered with the ROM on the phone.
The Samsung Galaxy SIII Sudden Death is a well-known problem, and Samsung is quiet about it. Basically, you are using your phone and, for no reason at all, it freezes. You can't turn it off; you can't do anything except take the battery out.
The boot might show something like this:
CUSTOM BINARY DOWNLOAD: No
CURRENT BINARY: Samsung Official
SYSTEM STATUS: Custom
The "SYSTEM STATUS" shows "Custom" because the NAND memory is corrupt and the phone can't read the product name or system partition, so it defaults to 'SYSTEM STATUS: Custom'.
According to a discussion on XDA:
The following ROMs include a Kernel, bootloader and recovery with the Update 7 "Fixes" applied. If you have one of these, officially consider yourself "Safe". If you rooted one of the below stock ROMs, you will also be safe, however - if you changed to a custom kernel or recovery, you need to look at the below custom sections. If you have never rooted, the stock section is all you need read.
As it appears all 4.1.2 kernels have the fixes, there is no longer a need to test them all. See below for a list of tested kernels that have the fix. All kernels subsequent to these will also be regarded as safe.
People who have never rooted need not read any further. Essentially, if you have an official, never rooted 4.1.2 ROM, you're "safe"
You can check your handset by running eMMC Brickbug Check. At this moment there isn't firm information on whether this is really just a firmware issue or a hardware problem with some batches of the memory used in these handsets.
We read reports on Geekzone of people saying that repair services denied warranty repairs based on this system status, claiming the phone had been modified even when there was no modification at all.
If you get this kind of response, do not settle. Take the handset back to the retailer where you got it from and make sure they understand this is a known fault. You do not have to deal with Samsung as under the Consumer Guarantees Act this is what retailers have to do. Make it clear it's a problem that Samsung is aware of and it must be repaired under warranty.
After waiting for ages we finally got the Windows Phone 8 Portico update in New Zealand. It's said to fix some freezes and restarts, plus correct the SMS date problem.
Surprisingly, there was none of that "staggered release" rigmarole. Once Windows Phone 8 Portico is available, you just have to check for updates on the phone and it downloads over the air (OTA), ready to install. Install time will depend on how full your phone is, but mine took just around 25 minutes.
So, here is my score card for Windows Phone 8 update:
- Ease of install: 10/10
- Download speed: 10/10
- Availability: 3/10
All in all, pretty good having it available now, and easy to update too. But not good enough. Last week I was driving to town when the Bluetooth speakerphone in the car announced "Connection lost". I looked down and the phone was restarting itself. The Nokia logo showed up and it just stayed there. That logo stayed on the screen for five hours (hey, great battery!) until someone told me about the soft reset procedure. Until then I was thinking "great, just before the update that supposedly fixes these problems my phone crashes and needs to go away to be flashed".
Luckily the reset worked and the phone seemed ok after that. And today the phone got the update OTA so I feel a bit better about not losing the phone.
As usual, manufacturers say Microsoft is the one to be blamed for the timing, Microsoft says the mobile operators are the ones who decide if updates can be deployed, and so on. A loop of excuses, where consumers are the ones with no say on when or how.
Basically, as I said before, Microsoft should separate app and UI fixes and new apps from network updates, and deliver Windows Phone updates every month instead of waiting for a twice-a-year release cycle. It's not like they have the leading mobile platform in the world and can do whatever they want. If they aren't good at this now, I'm sorry, they are toast.
Having said that, they do act like they have the #1 mobile platform in the world, and don't need users to download apps:
I know Apple and Android also do region locking. But they don't have to compete from the last position in the market.
The Windows Phone folks at Microsoft managed to raise my expectations and failed completely to deliver.
I tried one last time to update the two Windows Phone 7 devices I have here (mine and my wife's) to the latest 7.8 release, and again Zune says both are up to date and no updates are available. This is when Windows Phone 7.8 has been available for almost four weeks here in New Zealand.
Seeing that Microsoft persisted with this crappy experience in the Windows Phone ecosystem, taking the same "managed release" idea to Windows Phone 8, I just removed Zune from my laptop and won't bother updating the old handsets. And when they die we will just replace them with something that, you know, works.
If only they actually treated Windows Phone updates as serious business and delivered these instead of playing around.
As for my Nokia Lumia 920 (the one I use as my day phone), I will continue to use it. If anything happens then this too will be replaced with something else that just works. Something might even happen by "accident" to this handset if I get worked up enough.
As part of keeping up with the times, this last weekend I finished moving the Hyper-V VMs behind Geekzone to Windows Server 2012. Someone in our forums was curious about how we could have Geekzone running on a single VM instance with no load balancers and so on, so he asked me to post about what's behind our website, how it changed over the years and what we do to keep performance up.
We currently serve around 230,000 pages a day (user requests and AJAX requests for some pages) plus other resources such as images, scripts and CSS files.
When I started Geekzone it was a domain on a shared hosting service called Ocoloco, provided by a small Masterton-based company called SiliconBlue. In 2003 Auckland-based ISP ICONZ bought Ocoloco and with that they became our hosting provider. Back then we had a single domain running on IIS, Classic ASP and a Microsoft Access database. We were serving 10,000 pages a month after a few months and that was BIG.
Our first project was to move from Microsoft Access to Microsoft SQL, still in the shared environment. We knew Microsoft Access doesn't scale well, but back then we never thought we'd be serving more than 10,000 pages a month.
This worked out well until we got big enough that we had to sometimes call our provider and ask them to restart their SQL server two or three times a day, due to the server crashing under our load. ICONZ suggested we should really get our own server (back then virtual environments weren't a big thing).
We bought our first server from ICONZ, an Acer server with 3GB RAM. We installed Windows Server 2003 and Microsoft SQL. An entire server just for us! It worked fine for a few years until we got to the point where our requirements were really pushing the limits of that 32 bit hardware.
HP came into play and we were supplied with an HP ProLiant DL360 server (like the one in the picture above) with 10GB RAM. Loaded with Windows Server 2008 and Hyper-V, we had enough to run a VM for Geekzone (IIS/SQL database), a test VM and a monitoring VM.
That's when I started getting serious about performance. While many companies solve their performance problems by installing more hardware we tried to use more of the resources we had available. The monitoring VM runs SQL Sentry and SQL Monitor for database monitoring, cache plan testing and other management tasks. I spent a lot of time optimizing indexes, working the database model and so on.
At this time I also decided to move from a single IIS worker model to multiple workers (an IIS web garden). To get there I had to rewrite our session management routines to use the SQL database, allowing sessions to persist between IIS workers and across the odd server restart (we do restart servers after applying the monthly patches released by Microsoft every second Tuesday of the month). I also worked with Redjungle's Phil to separate email notification delivery from the web application, as well as creating a MetaWeblog API for our blogging platform and a couple of .NET MVC web sites (Geekzone Mobile and Geekzone Jobs).
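The database-backed session idea can be sketched roughly like this. The real Geekzone code runs on Classic ASP/.NET against Microsoft SQL, so this Python/sqlite3 version, with hypothetical `create_session`/`load_session` helpers, is just an illustration of the technique, not the actual implementation:

```python
import json
import sqlite3
import uuid

# Sketch only: sqlite3 stands in for SQL Server, and the table layout is
# invented for illustration. The point is that session state lives in the
# database, not in any one web worker's memory.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS sessions (
        session_id TEXT PRIMARY KEY,
        data       TEXT NOT NULL   -- session state serialized as JSON
    )
""")

def create_session(data):
    """Persist a new session and return its ID (e.g. to store in a cookie)."""
    session_id = str(uuid.uuid4())
    conn.execute("INSERT INTO sessions (session_id, data) VALUES (?, ?)",
                 (session_id, json.dumps(data)))
    conn.commit()
    return session_id

def load_session(session_id):
    """Any worker, or a freshly restarted server, can rehydrate the session."""
    row = conn.execute("SELECT data FROM sessions WHERE session_id = ?",
                       (session_id,)).fetchone()
    return json.loads(row[0]) if row else None
```

Because the state is in the database rather than tied to one process, a request can land on any IIS worker and the session still resolves, and sessions survive the patch-day restarts.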
Another advantage of this approach is the ability to scale out - and it does work well as I found out when migrating our applications from the old Windows Server 2008 VM to the new Windows Server 2012 VM. I was able to move web applications one at a time and sessions worked across different hosts, sharing the database across a Hyper-V private network.
Around the time we started playing with performance I got to meet the folks at Aptimize, whose product is now Riverbed Aptimizer. Aptimize was a Wellington-based company until Riverbed acquired them in 2011. The software works automatically, examining all pages served from our servers and applying rules that determine how to optimize web pages for the best client performance. This includes image sprite creation, script and CSS minification, URL rewriting for CDN resources, lazy loading of images, async script loading and so on. We started using Aptimizer and it improved page speed almost instantly, which gave us time to put a lot of effort into the database side of things and take everything a step further.
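As a rough illustration of one of those rules, here is a toy CSS minifier in Python. This is not Aptimizer's code and real optimizers do far more; it just shows the basic idea of shrinking a stylesheet without changing what it means:

```python
import re

def minify_css(css):
    """A toy CSS minifier: strips comments and collapses whitespace.
    Fewer bytes on the wire means faster pages."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.DOTALL)  # drop /* comments */
    css = re.sub(r"\s+", " ", css)                        # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)          # tighten punctuation
    return css.strip()

example = """
/* main styles */
body {
    color: #333;
    margin: 0;
}
"""
print(minify_css(example))  # body{color:#333;margin:0;}
```

The same pattern (a pure text transform applied to every response) is how rules like script minification or URL rewriting for CDN resources can run automatically without touching the application code.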
Around 2009 we decided to move our server from ICONZ, mainly due to colocation and traffic costs. We know 60% of our traffic is New Zealand-based, and of that 75% is from Auckland alone, so when the time came for us to move hosting companies we examined a few companies around Auckland and decided to go with Datacom. They were really good at putting together a package for our small one-man operation. And so one day we unplugged the server at ICONZ, loaded it into Nate's car and drove across Auckland to its new home. The Datacom datacenter is so huge that I am pretty sure I might never see the server again.
The Datacom move was really good, with improved bandwidth giving our users even faster access to our website. But we know a lot of people access Geekzone from outside New Zealand, so we started using a CDN to distribute the heavy resources around the world: initially MaxCDN (their prices are really good) and lately Cloudflare. There are two reasons we moved to Cloudflare: they have a POP in Sydney, which is pretty close to New Zealand, so we could move to them with low impact on our users; and their Pro plans support SSL for the CDN, which was a problem for us before (we used to have different CDN rewrites for SSL and non-SSL pages; now we have only one).
We do not use Cloudflare for page optimization because that would add unnecessary round trips for the majority of our users. But using Aptimizer together with Cloudflare for CDN we can get our resources closer to users and manage the cache expiry in their browsers and in the ISPs' proxies, making everything faster than ever.
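To show what managing cache expiry means in practice, here is a minimal sketch (plain Python, not Aptimizer or Cloudflare code, with a hypothetical `cache_headers` helper) of the HTTP headers that tell browsers and ISP proxies how long they may keep a static resource:

```python
from datetime import datetime, timedelta, timezone

def cache_headers(max_age_days):
    """Build Cache-Control and Expires headers for a static resource."""
    max_age = int(timedelta(days=max_age_days).total_seconds())
    expires = datetime.now(timezone.utc) + timedelta(days=max_age_days)
    return {
        # "public" lets shared caches (ISP proxies, CDN edges) store the file
        "Cache-Control": f"public, max-age={max_age}",
        # Expires is the legacy equivalent for older HTTP/1.0 caches
        "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
    }

headers = cache_headers(30)
print(headers["Cache-Control"])  # public, max-age=2592000
```

With long expiry on static resources and versioned URLs for anything that changes, repeat visitors pull most of a page from their own browser cache or a nearby proxy instead of from the origin server.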
Since then we increased the memory on the server to 24GB to allow for better memory management as well. And while our Windows Server 2008 setup was working perfectly well, I decided to move to Windows Server 2012 for a few reasons, mainly faster OS startup, OS support for NIC teaming, and Hyper-V Dynamic Memory. And also because this is Geekzone, so why not?
So that's it. A bit of geek history and things I've done over the last few years. More to come (and if you need more information or some help with your current setup, contact me and we can have a chat).