Last weekend a press release landed in my inbox, and I thought it was interesting enough to contact the agency and ask for more information about the product. In summary, ScaleArc iDB promises to scale your database without changes to your code or to the database itself:
ScaleArc, the pioneer in a new category of database infrastructure that accelerates application development by simplifying the way database environments are deployed and managed, today announced general availability of iDB v2.0 for Microsoft SQL Server that brings significant new capabilities to SQL Server environments such as instant horizontal scaling, higher availability, faster database performance, increased SQL protection and real-time query analytics. iDB takes a fundamentally different approach at the SQL protocol layer by providing customers with a wide spectrum of capabilities for their database environment in a single solution, without requiring any modifications to existing applications or SQL Server databases.
Until now, moving to advanced architectures like multi-master, or achieving instant scale and better performance within SQL server environments, has been costly and extremely difficult to implement. iDB v2.0 for MS SQL supports a wide range of functions including Read/Write splitting, dynamic load balancing and horizontal scaling, query caching for up to 24x faster query responses, wire-speed SQL filtering and real-time instrumentation and analytics to enhance all deployment modes of SQL server, including SQL Server Clustering, SQL mirroring, Peer-to-Peer (P2P) Replication and log shipping.
iDB for MS SQL Feature Highlights
- Dynamic Query Load Balancing for High Availability: ScaleArc iDB implements a specialized dynamic load-balancing algorithm that allows the most efficient utilization of available database capacity, even when servers have varying capacity. iDB monitors query responses in real-time and can load balance queries to the server that will provide the fastest response to properly distribute the load. Up to 40% better performance has been observed with iDB's dynamic load balancing relative to TCP-based load balancing.
- Pattern-Based Query Caching for Increased Performance: ScaleArc iDB allows users to cache query responses with one click. No changes are required at the database server or in the application code; the query is cached at the SQL protocol level, providing up to 24x acceleration without any modifications.
- Multi-master: iDB supports multi-master and master-slave scenarios to ensure high availability and scalability. Specific queries, irrespective of their origin, are routed to the right server with the advanced query routing engine, which also simplifies sharding.
- Real-time Analytics: Advanced graphical analysis tools provided by ScaleArc iDB bring comprehensive real-time awareness of all queries, helping to quickly pinpoint query patterns that are not performing optimally and allowing more precise management.
- Wire-Speed SQL Filtering: iDB is able to enforce query-level policies for security or compliance reasons to protect against attacks, theft and other threats. iDB can operate outside of the application, where policies have not traditionally been easily enforced.
- SQL Query Surge Queue: Extreme loads can lead to unacceptable response times or even a halt of operations until the load reduces, leading to "Database not Available" errors. ScaleArc iDB allows a more graceful response to peak loads: when faced with an extreme load, it can initiate a SQL Query Surge Queue, momentarily holding queries in a FIFO queue and processing them once server resources become available.
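The surge queue described in that last point is essentially a bounded FIFO sitting in front of the database. A minimal sketch of the behaviour in Python (my own illustration of the idea, not ScaleArc's implementation; the `max_concurrent` threshold is an assumed parameter):

```python
from collections import deque

class SurgeQueue:
    """Hold queries in FIFO order when the server is saturated (illustrative sketch)."""

    def __init__(self, max_concurrent):
        self.max_concurrent = max_concurrent  # assumed server capacity before queueing kicks in
        self.in_flight = 0
        self.waiting = deque()

    def submit(self, query):
        """Dispatch immediately if there is capacity, otherwise queue instead of failing."""
        if self.in_flight < self.max_concurrent:
            self.in_flight += 1
            return query              # dispatched to the database straight away
        self.waiting.append(query)    # queued rather than returning "Database not Available"
        return None

    def on_complete(self):
        """A query finished; dispatch the oldest waiting query, if any."""
        if self.waiting:
            return self.waiting.popleft()  # the freed slot is reused immediately
        self.in_flight -= 1
        return None
```

The point of the sketch is simply that during a spike, callers see extra latency rather than hard errors.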
Obviously I was a bit skeptical of their claims, so I asked a couple of questions. Here are the answers:
What happens to cached query results when the result changes? For example a record is updated - will the next query use previous results, or get new results?
The key to iDB lies in our Analytics. We provide granular real-time data on all SQL queries flowing between application servers and the database servers. As such, customers now have the intelligence they need to understand the query structure, the frequency it hits the database, the amount of server resources it takes, etc. We then give the customer the power to cache on a per-query basis, but we do not set a Time-To-Live for the customer. They need to understand how often the query will be updated, and ensure they do not set a Time-To-Live that may serve stale data if an update comes in from the application. We allow customers to set TTL anywhere from 1 second to multiple years. When a cache rule for a query is activated with a single click of a button, we immediately measure the performance and offload impact of the cache. And since our cache on iDB is a hash map that caches the TCP output of Read queries, subsequent Read queries served from our cache are served up to 24x faster (or more).
ScaleArc also has an API that can be invoked from the application to add, invalidate and bypass the cache for specific SQL statements.
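From that description, the cache behaves like a hash map keyed on the query text, with a per-query TTL and an invalidation hook. A rough Python sketch of the idea (names like `set_rule` and `invalidate` are my own, not ScaleArc's actual API):

```python
import time

class QueryCache:
    """Per-query TTL cache keyed by SQL text (illustrative sketch, not ScaleArc's code)."""

    def __init__(self):
        self._store = {}  # query text -> (result, expiry timestamp)

    def set_rule(self, query, result, ttl_seconds):
        """Cache a query's result with the TTL the operator chose for it."""
        self._store[query] = (result, time.time() + ttl_seconds)

    def get(self, query):
        """Return the cached result, or None if absent or the TTL has elapsed."""
        entry = self._store.get(query)
        if entry is None:
            return None
        result, expires_at = entry
        if time.time() >= expires_at:
            del self._store[query]  # TTL elapsed: next request goes to the database
            return None
        return result

    def invalidate(self, query):
        """What the API hook would do when the application updates the underlying data."""
        self._store.pop(query, None)
```

The key trade-off their answer describes is visible here: nothing evicts an entry on a write unless the operator picked a safe TTL or the application calls the invalidation API.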
How much more memory does it require? Or does it use the SQL DB footprint?
ScaleArc iDB is a network appliance-style deployment and does not have any agents on the server or the application. This means iDB has its own physical/virtual machine to perform its operations. iDB can run load balancing within 4GB of memory; however, for caching and logging purposes iDB can address up to 128GB of memory.
iDB is a separate instance from the database. Most customers run our software on a dedicated x86 server to make it a dedicated appliance. We also sell appliances, or iDB can be installed on a hypervisor as a Virtual Machine. iDB does not require a lot of memory to operate, but we can allocate up to 128GB of RAM for caching of READ queries. Query logs are stored on drives on the appliance.
Very interesting - an appliance for SQL TCP output caching. Ok, I have signed up for a 30-day trial to see how much difference it can actually make.
UPDATE: Someone on Facebook said this was advertising. IT IS NOT. I was not asked to post about it, and did not receive any payment to post about it. If you are so inclined please read my FULL DISCLOSURE post.
Learn how the Riverbed performance platform can help you up your IT game
With the growth of virtualization, consolidation, and cloud computing have come new challenges. IT is increasingly consolidated and virtualized while workers and consumers are distributed. How best to harness these approaches and deliver the efficiency and control your organization requires, while ensuring that end users get the performance they need?
Attend the Riverbed Performance Summit to find out how Riverbed empowers enterprises like yours with the tools to analyze, accelerate, and control your IT. Stay on top of the latest technology and solutions from Riverbed and join us for a deep dive into our vision for delivering performance for the globally connected enterprise.
Sign up to connect with Riverbed technology experts and your peers to learn how you can get more out of your Riverbed investment.
At this exclusive event you'll hear firsthand from our experts on how to maximize your Riverbed investment with the latest release of cutting-edge performance platform products and solutions:
- Granite, our revolutionary new product for consolidating edge servers in the data center
- Getting the most out of the latest release of RiOS (7.0), including optimization for video, UDP, IPv6, and VDI
- Steelhead Cloud Accelerator, a new powerful solution for boosting the performance of SaaS applications
- The latest product updates, technical overviews, demos, and more
Register now and discover how to make the Riverbed performance platform work for you. Find out how you can finally consolidate your entire infrastructure, including edge applications, servers and storage to the data center, all without compromising performance.
A shame I won't be attending this event, since it falls on the same week I will be in Las Vegas for HP Discover 2012.
I am attending HP Discover 2012 Las Vegas (4th - 7th June). This is the second time I have been invited to this event, along with other bloggers from around the world - this time Ben Kepes will be another Kiwi blogger joining me to hear from HP's management (he blogs about cloud, infrastructure and more at diversity.net.nz).
In my other blog at DiscoveringHP.com I have posted "11 Reasons to Attend HP Discover 2012 in Las Vegas".
Have a look at the HP Discover 2012 program for this year's conference and if you (or your company) are planning to attend, use the discount code "BLOG" for US$300 off during registration.
Disclosure: I am attending the conference as a guest, and HP is sponsoring my trip.
In the last week of April I got a phone call from Snapper. They were getting ready to deliver a New Zealand first in partnership with 2degrees - an NFC-based mobile payment solution with practical use: Snapper on your smartphone, branded 2degrees Touch2Pay - and wanted to reach the Geekzone community to show this new way of paying for things.
Wellingtonians in general have embraced Snapper in impressive numbers (although Snapper is not limited to Wellington). Over 370,000 Snapper cards have been issued, generating more than 100 million transactions across over 1000 buses, 3000 taxis and over 500 retailers in New Zealand.
In just under a week we contacted Geekzone users in Wellington, inviting them to a mystery Q&A event. We had a great response and the 80 available seats were filled very fast. On the day we had 90% attendance, and people enjoyed drinks and nibbles while listening to Snapper explain why they decided to do it and what steps were taken to design the Android app, LG tell the audience the dirty tech bits behind the NFC technology, and 2degrees show how all this integrates into the mobile network (including SIM authentication/authorisation).
We also had a couple of LG Android smartphones, preloaded with some credit, ready to give away.
In the few days leading up to the event we started a discussion where I asked people what they thought would be announced. That discussion ended up with 200 replies and 15,000 views. While some correctly guessed an NFC-based mobile payment solution, others provided great feedback with their wishful thinking on new products/services. I was told by 2degrees, LG and Snapper that they were following the discussion closely. We have now closed that discussion and opened a new one to discuss the Touch2Pay service.
I wish other tech companies in New Zealand would reach out to our community like Snapper, 2degrees and LG did. It was fun and we had a high attendance of interested people... You know how to contact me!
Apparently the problem with Skydrive content not being accessible from Windows Phone devices yesterday is now solved. It looks like for a window of six to eight hours (or even a bit more) Microsoft had some bad redirects that caused attempts to access documents stored on Skydrive to fail.
This only happened when accessing Skydrive from the dedicated Windows Phone app, but it worked fine from the Windows Phone Office Hub.
I have looked at both the Skydrive and Google Drive apps and found that they work practically the same when it comes to managing files and transfers. However, Skydrive gives me 25GB instead of Google Drive's 5GB, and the Windows app gives me remote access to all the drives on my desktop - very handy if I'm out and about and need anything that's not on Skydrive (music, videos, etc). Also, Google Drive's T&Cs are a "cloudy" business, with wording that says customers grant use rights to all uploaded content so Google can use it for their "product development".
Reading through some Geekzone discussions I've noticed people still don't know about some other cloud solutions and services, so here is a comprehensive comparison of cloud-based storage and synchronization solutions for the consumer market.
Sorry to break it to you, people, but it doesn't work like this. It seems you can only download images and sounds from Skydrive to your Windows Phone device. All those .docx, .xlsx, .pptx and .txt files you have? Forget about it. They won't download.
*sigh* Why is it so hard for software companies to make Things That Just Work (TM)?
UPDATE: after six hours it's working now. Since I know of more people having this same problem, could it be that Microsoft's cloud service was overloaded? Or did my 5GB of uploads simply have to go through some process to be readable? Who knows. I'm sure moving to the cloud should be more reassuring than this though.
At the end of the day, what you want is a faster loading web site that will help your company achieve an objective.
For example, when I started working to make Geekzone a faster web site, our metrics included reducing web page load time, increasing the number of repeat visitors, increasing time spent on site and increasing the number of page views - we don't sell a "product", we sell advertising after all, so those were the important metrics for us.
Using tools like WebPageTest allowed us to measure the time a web page takes to load in different parts of the world. Even though 40% - 45% of our traffic is New Zealand-based, we still have a large number of visitors coming from overseas (including the United States, Australia, Canada, the United Kingdom and India).
A couple of years ago our average web page load time was around 10 seconds for a visitor coming from the US. By following through with changes to our database, backend scripts, hosting provider and CDN, we managed to reduce the web page load time to around 6.5 seconds on average when measured from Dulles, VA.
With automatic web optimization software (in our case Riverbed Stingray Aptimizer) we managed to reduce the time even further to 4.5 s as you can see in the image below, captured from a WebPageTest run earlier today:
If you are in New Zealand our web page load times are even lower, on average 1.5 seconds for a complete page to be ready to be used.
In another post I will talk about each of the items we touched when improving performance on Geekzone - make sure to subscribe to my RSS feed. Of course if you run a web site and think a Web Performance Optimization project could help you improve metrics, please contact me and we can work on this.
Continuing my series of posts about Web Performance Optimization (WPO), here is another thought: use a Content Delivery Networks (CDN) to speed up web pages and save money.
Even though bits travel fast, it all comes down to distance and number of bits. The closer you are to your users, the faster your web pages will load. That's where CDNs help us, web site owners. While a robust web site might have geographically distributed content servers for performance and redundancy, maintaining this infrastructure comes at a cost.
CDNs provide a balanced distribution platform that allows content providers to store resources closer to their clients, making everything a bit faster. Here at Geekzone we currently use MaxCDN, but have also played with Fastly and Amazon Cloudfront. We currently have a mixed DNS and CDN solution (which I will expand on in another post).
CDNs can be used in many different ways. The most common are Push and Pull. With Push CDNs you are responsible for loading your web resources to their servers, while Pull CDNs will automatically retrieve your web resources from a nominated origin server when a request first comes in.
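A Pull CDN edge node can be sketched in a few lines of Python - this is my own illustration of the behaviour, with `fetch_from_origin` standing in for the request to your nominated origin server:

```python
class PullCDNEdge:
    """One edge node of a Pull CDN: fetch from origin on first request, cache after
    (illustrative sketch; real CDNs also honour TTLs and cache headers)."""

    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # callable: path -> content
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, path):
        if path in self.cache:
            self.hits += 1            # served from the edge; origin never touched
            return self.cache[path]
        self.misses += 1              # first request: pull from the origin server
        content = self.fetch_from_origin(path)
        self.cache[path] = content    # subsequent requests are served locally
        return content
```

With a Push CDN the `cache` dictionary would instead be populated by you uploading resources ahead of time; the serving side looks the same.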
Below is the stats panel for one of our CDN configurations with MaxCDN, where you can see how the content is distributed through the nodes and how much data is used up every day:
And below you can see the traffic (in number of hits) including cache hits and non-cache hits:
Coming from New Zealand, where data traffic is usually one of the highest costs in a web site operation, CDNs have the side effect of helping web site owners save on traffic. You can see that our CDN serves something between 400 MB and 1.2 GB a day, depending on traffic, with 90% cache hits. This means 90% of the requests are served from the CDN caches directly, without ever reaching our servers.
CDN configuration can be as simple as just creating new DNS records pointing a resource domain to the CDN subdomain created for your specific configuration. If your web site doesn't currently use a separate domain for serving up those resources (images, scripts, CSS, static HTML) there are solutions that can automatically rewrite those when a page is requested.
When using a CDN it's important to make sure your web resources are correctly configured with appropriate cache expiry and public caching headers. If this is not possible to configure on your server, there's always a setting on the CDN that will allow you to override settings from the origin server with new default values.
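For example, a hypothetical helper for generating the headers that let a shared cache (the CDN) store a resource and serve it to everyone - the one-day `max_age_seconds` default is just an illustration, not a recommendation:

```python
import time
from email.utils import formatdate

def static_asset_headers(max_age_seconds=86400):
    """Build cache headers for a static resource (illustrative default of one day).

    "public" tells shared caches such as a CDN they may store the response;
    "max-age" controls the cache expiry; "Expires" is the HTTP/1.0 equivalent.
    """
    return {
        "Cache-Control": f"public, max-age={max_age_seconds}",
        "Expires": formatdate(time.time() + max_age_seconds, usegmt=True),
    }
```

Whatever framework or server you use, these are the two headers to check in the responses your origin sends the CDN.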
In another post I will talk about latency - make sure to subscribe to my RSS feed. Of course if you run a web site and think a Web Performance Optimization project could help you improve metrics, please contact me and we can work on this.
Continuing my series of posts about Web Performance Optimization (WPO), here is a thought: focus on high-impact web pages first. This might seem obvious when you read it, but from my experience most people don't actually put limits on a WPO project, and over time the benefits are diluted.
The first thing to do is to identify possible candidates for a WPO project. In a previous project we found that one single script received 80% of all requests. We (the web site owner and myself) decided to concentrate our efforts on this web page first.
Basically, we apply the Pareto Principle and concentrate our efforts on that page responsible for 80% of the total requests using only 20% of the overall time of a full WPO project, with more immediate results. We then have time to concentrate on the other 20% of pages which could take up to 80% of the project time, if needed.
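The 80/20 selection itself is easy to automate once you have request logs. A small Python sketch (my own; the paths in the test are made up for illustration):

```python
from collections import Counter

def pareto_pages(request_paths, share=0.8):
    """Return the smallest set of pages that together receive `share` of all requests.

    `request_paths` is one entry per request, e.g. extracted from an access log."""
    counts = Counter(request_paths)
    total = sum(counts.values())
    selected, covered = [], 0
    for path, n in counts.most_common():  # busiest pages first
        selected.append(path)
        covered += n
        if covered / total >= share:      # stop once the target share is covered
            break
    return selected
```

Run it over a week or two of logs and the output is your short list of pages to optimize first.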
Obviously if you have a page that is hit only a few times a day but still manages to bring the whole web site down, then this should be looked at too.
The tool of choice for this part of the project is web site analytics (Google Analytics is my favourite - it's free!). Data needs to be collected for a while to help determine the exact focus of the sub-project.
Once a web page is selected then a holistic approach takes place. Waterfall diagrams (I will talk about these in another post later) can be used to determine the balance of back end and browser side load times, helping determine which side needs more urgent attention. Scripts can be used to monitor events and report back with signals that can be used to determine specific areas causing slow rendering on the client side.
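As a rough illustration of that back end versus browser-side balance, given three of the timestamps the browser's Navigation Timing events expose (navigationStart, responseStart and loadEventEnd, all in milliseconds), the split can be computed like this - a simplified sketch, not a full waterfall analysis:

```python
def load_time_split(navigation_start, response_start, load_event_end):
    """Split total page load time into backend and browser-side portions.

    Timestamps are in milliseconds, as reported by browser timing events.
    This is a simplification: "backend" here includes network transit time."""
    backend = response_start - navigation_start   # DNS, connect, server generation
    frontend = load_event_end - response_start    # parse, fetch resources, render
    total = load_event_end - navigation_start
    return {
        "backend_ms": backend,
        "frontend_ms": frontend,
        "backend_share": backend / total,  # if this is small, look at the browser side
    }
```

If the backend share is large, database and server-side script work pays off first; if it is small, the waterfall diagram of browser-side requests is where to dig.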
I will keep posting in this series - make sure to subscribe to my RSS feed. Of course if you run a web site and think a Web Performance Optimization project could help you improve metrics, please contact me and we can work on this.