Why Hybrid Apps Still Can’t Compete with Native


Hybrid apps built with tools like Adobe AIR, Cordova, jQuery, Bootstrap, Ionic and Angular continue to improve. However, these apps still lack the quality required in many situations.

Here’s why:

1. Native UX = High Quality App

For almost every user – regardless of how savvy – native UX signals a high-quality, professional app, no matter how well the Hybrid version is done.

And you can’t fake it, because Android and iOS keep changing the look of their UIs. Today it’s “Material Design” on Android and “Flat Design” on iOS. So until there is a high-performance, cross-platform UI that becomes universally accepted by users, “native-appearing” HTML5 frameworks like ChocolateChip-UI just won’t cut it.

Furthermore, I have my doubts about cross-platform compilers such as Titanium and Sencha. It’s the same type of thing – too many layers, leading to loss of control over your development and, ultimately, a poor user experience.

2. Enjoy working against Google and Apple?

You will also be working against Google and Apple, who nominally support HTML5 but really don’t. For example, read Google’s App Developer Best Practices. According to them, the primary essential of a “quality app” is to “Make it Android”.

3. Performance Is Way More Important

Watch young people flick and slide their apps around. They don’t want to wait. They want offline support – which is (practically) unsupported in HTML5. Animations and content loading must be really fast.

4. Personality!

There’s just something about the “closeness” of the app that makes personality more important on mobile. On desktop, work is the defining activity. In the mobile world, entertainment and communication are the primary activities. It’s less business-like; more personal.

People want to see icons and text in perfect resolution when using apps. They want a “delightful”, smooth experience. Anything less is quickly spotted as fake.

5. Cross-Platform Bugginess

It’s easy to get a basic HTML5 app running on the app store, but the amount of time spent resolving cross-platform issues can be huge.


Below is a talk from a C++ conference explaining how Microsoft ported Office to every imaginable platform.

They tell it like it is – hard work with no magic solution.

Their approach, however, was the exact reverse of the typical Hybrid approach of embedding JavaScript and HTML5 into a WebView.

Their approach is to always use native UI components, but keep the UI code “very thin”. Most of the code is C++, so it can be reused from platform to platform.

Perhaps this is how cross-platform will happen in the future.

Posted in Android App Development, Hybrid Apps, IOS App Development, Uncategorized

“If You Don’t Have a Mobile Website, You Don’t Have a Website”


I thought this was an interesting quote from Paul Bakaus, from the Chrome DevTools team. It’s precisely what we’re doing at DALSASS.mobi. We’re taking the conventional web development company and making it mobile-first from the ground up. All the while, we’re bringing the openness of the web back to mobile.


For those doing mobile development, the DevTools Device Mode Paul describes in the video is worth trying. You’ll need a pre-release build of Chrome, such as Canary or Chromium. A quick download can be found at http://www.chromium.org/getting-involved/download-chromium.

Posted in Uncategorized

How I (Finally) became a “Mac Person”


Here’s how I switched to Macintosh for my primary development environment. (I still use Windows for debugging and Outlook/Exchange).

After transitioning from Neptune Web early this year, and then starting DALSASS.mobi, I went to visit my old boss, Matt Krom, from NFIC and, later, Banta Integrated Media (now R.R. Donnelley). Ah, my first real job. At age 24, I started as a web developer in 1997 at the American Twine building in Cambridge, just a block or two from CIC, where I am now. The pre-Google search engine Northern Light was in the same building. We were on lunker Solaris 8 workstations back then, which I found clunky.

In 1998, after seeing a coworker’s setup with better software like Dreamweaver, I switched to Windows NT, which felt really slick compared to Solaris. Others, like Matt, started going in the Mac direction. Yet I stayed on the Windows train longer than just about anyone else.

In 2014, I had just bought a sweet Dell Latitude 6430u laptop and was running Windows 7. At CIC I noticed that everyone was on a Mac. Linux was a distant second. Only a few people were on Windows – and very few of them were developers.

Sometimes you do something for so long, you just never consider switching until something big changes your perspective. For example, when you start a new business, all your assumptions suddenly change. That’s looking back. But I’m writing this post because I think my switch reveals underlying industry trends, which might help predict the future.

Some of you will point out that there are a ton of options out there, like VirtualBox for running multiple operating systems. This is great for testing. But as a developer, I want a single, familiar, comfortable environment where I work every day. After all, there are only so many keystrokes a person can remember. (I hate having keystroke commands cluttering up my head unless I use them every day. Life is a river, man.)

I’ve always had a foot in the Mac/Apple door. My wife is a graphic designer and a Mac person. During my career, I would frequently borrow her Mac for Photoshop and fonts. When I started app development in 2009, I borrowed it for testing my mobile apps and for Xcode (Apple’s iOS development software, which is only available on a Mac). Having to borrow her computer so often started to annoy us.

I stuck with Windows because of:

1. Good integration with “business” tools, specifically Exchange, but also MS Word.

2. We were loyal Dell customers, which went hand-in-hand with Windows PCs.

3. Lower cost for the horsepower.

4. Ubiquity among corporate clients at Neptune. Trust me: if you get a client who views sites in IE first, you’ll be working on Windows first.

5. A perception of the Apple world as cultish and elite.

Here’s why I finally switched.

1. Perception of lower-quality tools. Why not have the best tools? For a small company, the cost difference is less important than that. It’s not like I have to upgrade an entire enterprise.

2. The nail in the coffin: app development. DALSASS.mobi develops apps for both iOS and Android, using both native and HTML5 components. Xcode doesn’t run on Windows. Android Studio runs well on both.

3. No more IE. Chrome and Firefox have surpassed IE almost completely – even in corporate environments.

4. Development style changes. My development has become more and more distributed over the past few years. Instead of using  CVS and developing on a common server, I started using Git with a local version of the entire development site on my laptop. I started doing automatic deployment to the cloud. And I was deploying to Linux, so why not have something which resembled production more closely in development? File path differences between *NIX-based systems and PCs were starting to drive me nuts.

5. Changes in corporate IT. The association between the corporate world and Windows has lessened in the past few years.

6. Web-based calendaring and email. Gmail and other web and mobile email/calendar solutions are becoming more accepted in companies.

7. Graphic design Zen. I’m not a graphic designer, but there were things I knew were missing. For example (I found this out after the switch), you can’t easily test high definition graphics without a high definition display. (Though to be fair, high resolution is now available for some PC laptops and monitors).

So I made the switch and got a 15″ MacBook Pro.

It’s been a simple transition, because I was in Cygwin or a browser so much anyway. As expected, the overall quality of the Mac experience is much higher. Things just work here.

I still want my email stored locally, for privacy and so I can read it offline. So, I haven’t switched to Gmail. I am still on Exchange/Outlook and go back to my PC for that.

Now I don’t have to borrow a Mac anymore. My wife loves that!


Posted in IOS App Development

Yes, Hybrid Apps can Receive Push Notifications!


Many people don’t realize that Hybrid Apps are able to receive Push Notifications. Push Notifications are a great way to “speak” directly to the users of your app. Here’s why:

They are direct. Push Notifications wake up the user’s device – similar to a text.

They are always present. Your icon appears at the top of the screen and is visible anywhere within the device (even while running other apps). (In web terms, this is better than “above the fold” ever was.)

Android Notifications

Android notifications on T-on-Time provide features to prevent users from getting annoyed, such as “Stop Today”. “Stop Today” disables the alerts for the duration of the day. In addition, “Settings” takes the user right to the notification settings panel.

Push notifications give your app a native touch. This is important in a Hybrid App because it addresses a fundamental weakness: users may suspect the quality of the app is lower than that of a fully native app. Sometimes this feeling is described as the “Uncanny Valley”. The user wonders… is this app just a “website impostor” on my device? Adding a native component can reduce this unease.

But Push Notifications aren’t that easy. For developers whose apps must run on a variety of different devices, setting up Push Notifications is a lot of work. There are a few reasons for this: 1) Accidental notification messages can really annoy users, which means you have to test really, really carefully before you release. 2) If you aren’t willing to settle for a generic experience in the notification itself, then you’re working in native land, and every device type needs to be developed and tweaked separately. 3) Push requires both a front-end and a back-end, has a registration step, and (usually) relies on Google GCM or Apple’s Push Notification Service. This makes testing difficult. Plus more API keys!
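To make the registration step concrete, here is a minimal sketch of the back-end bookkeeping. The names (registerDevice, tokensFor) and the in-memory store are hypothetical; a real service would persist tokens and hand them to GCM or Apple’s Push Notification Service.

```javascript
// Hypothetical in-memory registration store. A real back-end would persist
// tokens and use them when calling GCM (Android) or APNs (iOS).
const registrations = new Map();

function registerDevice(token, platform) {
  // Keep registrations anonymous: store only the token and device type.
  if (!token || (platform !== 'android' && platform !== 'ios')) {
    return false; // reject malformed registrations rather than guessing
  }
  registrations.set(token, { platform: platform, registeredAt: Date.now() });
  return true;
}

function tokensFor(platform) {
  // The sender fans out per platform, since each push service has its own API.
  return [...registrations.keys()].filter(
    (t) => registrations.get(t).platform === platform
  );
}
```

Even a sketch like this shows why testing is awkward: the interesting behavior only happens once real device tokens flow through it.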

There are a few services, notably Parse.com, which provide cloud-based services to make the Push Notification development process easier for multiple devices. I recommend services like Parse for getting a minimum-viable-product as quickly as possible. If you want something unique or special, you may need to go native on your Push Notifications and will likely need to build the back-end yourself.

I prefer to keep the number of dependent Cloud services limited. Adding cloud-upon-cloud service can be impossible when something gets lost somewhere between services that you have no visibility over.  Parse.com and Amazon SNS offer their own, independent notification services. This allows you to use a single, generic service for all devices instead of using Google GCM for Android, etc. Definitely consider these services if you are building cross-platform Push Notifications. Of course, the generic services aren’t going to offer all the unique features that Android and Apple offer. Check “Customizing Your Notifications” at Parse.com to get a sense of what options apply to what platforms.

Here are a few bits of advice to apply when developing Push Notifications:

1. Do not abuse push. People are giving you permission to alert them. Only send a Push Notification if it’s important to the user. (Consider disabling the push service completely at hours when people are likely to be sleeping.)
2. Be very careful with bugs. Accidental pushes could be deadly to your reputation.
3. Design your app’s logic with Safety Mechanisms, Safe default values and Logging.
4. It’s difficult to trace a support email or comment back to a particular user. Keep your logic simple and testable. Store your users’ device type, if possible, but keep registrations anonymous for privacy.
5. Expect differences in compatibility with different versions on a device type. For example, on Android, expanded notifications are not available until 4.1.
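Points 1–3 above can be combined into a small “gate” in front of whatever actually sends the push. This is only a sketch – the quiet hours and the one-per-hour limit are illustrative values, not rules from any platform:

```javascript
// Sketch of a safety mechanism guarding the actual push transport.
const QUIET_START = 22; // don't push after 10 pm (user-local time)...
const QUIET_END = 7;    // ...or before 7 am

function shouldSend(localHour, lastSentAtMs, nowMs) {
  // Safe default: when any input looks wrong, do not send.
  if (!Number.isInteger(localHour) || localHour < 0 || localHour > 23) {
    return false;
  }
  if (localHour >= QUIET_START || localHour < QUIET_END) return false;
  // Rate limit: at most one notification per device per hour.
  if (lastSentAtMs && nowMs - lastSentAtMs < 60 * 60 * 1000) return false;
  return true;
}
```

The point of the safe default is that a bug in the caller produces silence, not an accidental 3 am notification.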



Posted in Hybrid Apps, Uncategorized

In 2016, 50 percent of Apps will be Hybrid. Your Next Step?


Gartner’s 2013 mobile and wireless prediction: by 2016 more than 50 percent of mobile applications will be Hybrid.

What does this mean for you?

You may have already re-tooled your existing customer-facing websites to use Responsive Design in 2013 (read Why Your Business Needs A Responsive Website Before 2014). If people can’t access your site on mobile, they leave – and they may leave without your ever knowing they arrived. This could be up to 60% of your traffic.*

*At minimum do a few quick tests on various devices to see how your site works. If your site is usable, but not a terrific experience, the urgency is much less. I’m talking about those sites (and there are many) that are completely unusable on mobile. Look out for modal popups that can’t be closed, unreadable text, difficult navigation, tiny hyperlinks and Flash.

If you don’t have a responsive site, it’s time to catch up. But if you are on track, it’s time to consider a Hybrid app as a next marketing step after Responsive Design. This is because you can re-use portions of your Responsive site within the non-native portion of your app.

What is a Hybrid App? A Hybrid App is found in an app store (Google Play, iTunes, etc.), but contains components derived from HTML, loaded from the web or from local storage. Hybrid Apps vary widely in terms of what percentage is native versus web. (“Hybrid” is a fairly generic classification, but I’ve found most people aren’t aware of the full range of possibilities when it comes to apps.)

Hybrid apps are typically less expensive to build and maintain than native apps because the same code is used for all platforms. With a 100% native app, completely separate programming is required for Android and iOS (and Windows and Blackberry and whatever else the future holds).

It can also be less expensive to find developers with the skills to build Hybrid Apps, because HTML, CSS and JavaScript developers are generally more available and less costly than native app developers.


T-on-Time™ is a Hybrid App. It runs on the web, iPhone and Android, and has a presence in both iTunes and the Google Play Store. jQuery Mobile is used for the interface, which mimics the iPhone look.

Another big advantage of a Hybrid app is your ability to demo the web version (assuming your Hybrid app also has a web version available). Compare the amount of time it takes to demo the web version of T-on-Time vs. the App Store version. It’s always a hassle having to pull out the phone, search the store, install, download, etc., all while looking over a potential customer’s shoulder. Besides, it’s nice to see how an app works before committing to the download.

A Hybrid app gets you in the app store. It provides legitimacy to the brand and an additional place for your logo and marketing copy. Google ranks app store pages very highly because an app is a sign of legitimacy.

Having a shortcut on a user’s home screen is a huge benefit. Down the line, you can reach out directly to users with push notifications and app updates.

Do you have a configuration tool, product finder, web product, documentation index or cost calculator on your web site? Is it already responsive? If so, this could be a good starting point for a Hybrid App.

If you are trying to decide whether to go native vs. Hybrid, DALSASS.mobi recommends Martin Fowler‘s approach in Developing Software for Multiple Mobile Devices. Choose the “Laser” if your product is the app. Choose “Cover-Your-Bases” if your app is a marketing channel and user experience is less critical than reaching as many devices as possible. (Be careful with poor UI/UX. I’ve found user experience is always more important on mobile than on desktop.)


Posted in Hybrid Apps

Hello Cambridge Innovation Center!


Charlie at CIC C3 in Cambridge

DALSASS.mobi has moved into the Cambridge Innovation Center on One Broadway.

This is a great place to be. The talent, energy and excitement here is high. (I’ve already met many top-notch mobile developers here.)

CIC offers more perks than any of the other co-working spaces in the Boston area. Wireless, networking events, shared monitors – even a towel service. I looked at 4 other Boston co-working spaces – they don’t compare.  Seriously – it’s not even close.

I’ve found so many people here who are willing to reach out and help one another’s businesses grow. It’s part of the “community” ethos here.

And it makes sense.

Without a tight community, it’s nearly impossible to stay up-to-date with Internet and mobile technology. It’s just moving too fast.

There are so many advantages to being close to Microsoft, MIT, Google (and other great organizations).

FYI: For now, I’ll be keeping my existing address. However, as the business grows, I am planning to switch over to the One Broadway address entirely.

Posted in Uncategorized

Using Amazon CloudFront to Improve Global Web Site Performance

Update (Oct. 16, 2013): Amazon announced “POST” support for CloudFront. See https://forums.aws.amazon.com/ann.jspa?annID=2179

Update (June 12, 2013): Amazon announced “CloudFront Custom SSL Certificates and Zone Apex”. You no longer have to change the domain of your SSL site (point 2 in the checklist below), and you no longer have to treat canonical domains in a special way (point 9 in the checklist below). This is a significant improvement since I wrote this post in March. I’ve put asterisks next to those points below, so as not to change the original post.

Neptune recently migrated a large, multilingual, international web site to Amazon CloudFront.

It’s not perfect. But Amazon CloudFront is a service we’d definitely recommend to our clients. Here’s why:

  1. Improved website response time – giving a faster, slicker web experience for all users.
  2. Improved website response time specifically for international web users.
  3. Low-cost to configure compared to adding additional infrastructure.
  4. Allows you to keep your existing hosting infrastructure, c/o “Custom Origin” option.
  5. No need to change URLs (assuming “custom domain” is configured).
  6. Nearly unlimited “bursting” traffic capacity without having to setup new infrastructure.
  7. You retain complete control over your DNS. (A competing service, CloudFlare, requires domain control.)
  8. A better performing site may increase search traffic and sales.
  9. Minimal commitment.

Amazon CloudFront is a fairly new cloud-based service. Competitors include CloudFlare and Akamai. The service places geographically distributed HTTP caches “in front” of your existing site, caching or proxying both static and dynamic content. CloudFront is like having an HTTP cache (such as Squid or Varnish) in most major cities of the world. It also includes a dynamic DNS system that routes users to the nearest “Edge” location based on their DNS resolver’s IP. When a request can’t be served from cache, it is passed (or proxied) back to the origin server.

Since both static and dynamic content is served, the changes to your site are theoretically quite minimal. In reality, this depends on how dynamic and complex your site is. Here is a checklist of things that may need to change:

Amazon Cloud Front Migration Checklist

1. POST data cannot be passed through CloudFront.

This is probably the most difficult one. If you have forms that require POST (and can’t easily be converted to GET), I recommend setting up a new domain name for your site where forms can be posted. If your original site was “www.acme.com”, you’ll need to set up a new domain such as “origin.acme.com” (this can be any domain you choose – “secure.acme.com”, “post.acme.com”, etc.). You’ll need to change all form actions to POST to this domain, or update links so forms are reached on the origin site. Once the form is complete, I recommend redirecting the user back to the www site to make use of CloudFront.
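Hunting down every form action by hand is error-prone, so one option is to rewrite them in one place on the client. A sketch of the URL rewriting (the domain names are the examples from above, and originAction is a made-up helper):

```javascript
// Rewrites a form action so the POST goes to the origin host, bypassing
// CloudFront. Relative actions are resolved against the www site first.
function originAction(action, originHost) {
  const url = new URL(action, 'http://www.acme.com/');
  url.host = originHost;
  return url.toString();
}

// In the page itself, the same idea could be applied to every POST form:
// document.querySelectorAll('form[method="post"]').forEach((f) => {
//   f.action = originAction(f.getAttribute('action') || '/', 'origin.acme.com');
// });
```

Keeping the rewrite in one helper also makes it easy to undo later – which mattered once Amazon added POST support (see the update at the top).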

2. You’ll need to change your domain name for your SSL site. * (this no longer applies as of 6/12/2013, see update at top of page)

Another tough one. If you were hosting https://www.acme.com, you will need to purchase a new certificate to https://origin.acme.com. You can host your SSL content at https://d1dkq6joi5aul.cloudfront.net (the distribution URL), but you probably don’t want a URL which looks like that. Don’t forget that https://www.acme.com links may be linked all over the Internet in the form of external blog links and comments.

DNS for http:// and https:// can only point to one location. Once you make the switch, https://www.acme.com will print a nasty “This Connection is Untrusted” warning for all users unless you have completely disabled it.

3. You’ll need to carefully modify the HTTP cache-control, Expires and Last-Modified headers on your existing pages.

When I first started researching CloudFront, I was under the impression that setting the TTL within the “Behaviors” settings would mean I didn’t have to modify headers on my site. This is not the case. You need to become an expert on these 3 headers and gain complete control over what your existing pages send. I found the TTLs that Amazon provides fairly useless, and I was a bit disappointed that I can’t use the simple web interface to adjust caching reactively during high-traffic times – that requires a programming change.

First of all, you’ll want to give all static content a far-future Expires header. I typically do this with a global Apache rule:

# force caching for more speed of static content
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
Header set Expires "Thu, 15 Apr 2020 20:00:00 GMT"
</FilesMatch>

That’s easy. Pages are more difficult.

(This example uses PHP. Other platforms will be similar.)

First, I added an included file at the top of all pages.



For pages I wanted to cache for only a few minutes, I include cache-control.php as below. Notice that I only modify the caching if the User Agent is Amazon CloudFront. This ensures that my existing site doesn’t break.
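The include itself is PHP, but the logic can be sketched language-neutrally; here it is in JavaScript (cacheHeaders is a made-up name – the “Amazon CloudFront” User-Agent check is the one described above):

```javascript
// Returns response headers for a page that may be cached briefly by the CDN.
// Caching is only relaxed when the request comes from CloudFront, so regular
// browsers keep the site's existing (uncached) behavior.
function cacheHeaders(userAgent, maxAgeSeconds, nowMs) {
  if (userAgent !== 'Amazon CloudFront') {
    return { 'Cache-Control': 'no-store, no-cache, must-revalidate' };
  }
  return {
    'Cache-Control': 'public, max-age=' + maxAgeSeconds,
    'Expires': new Date(nowMs + maxAgeSeconds * 1000).toUTCString(),
  };
}
```

Branching on the User-Agent is what makes the rollout safe: if CloudFront is removed, every client immediately sees the old no-cache behavior again.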


For pages that I never want to cache, I call session_start() in my PHP. Most of my dynamic pages happen to do this anyway, and this gives me the default Expires and cache-control headers which prevent all caching. Of course, these headers can be set using “header()” if you don’t need the overhead of session_start().

On a page which should never cache, CloudFront gets the following headers:

Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0

You’ll always see a “Miss from cloudfront” in the “X-Cache” header of these hits.

On a page which should cache for maximum of 120 seconds, CloudFront receives these headers:

Expires: Sun, 24 Mar 2013 02:48:30 GMT
Cache-Control: public, max-age=120
Last-Modified: Sun, 24 Mar 2013 02:46:30 GMT
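Given headers like these, the freshness decision the edge makes can be modeled roughly as follows. This is a simplification – real caches also honor Expires, no-cache, and validation – but it captures the max-age behavior described above:

```javascript
// Rough model of how an HTTP cache (CloudFront, Squid, Varnish) decides
// whether a stored response is still fresh: compare the response's age
// against the max-age the origin declared.
function isFresh(ageSeconds, cacheControl) {
  const m = /max-age=(\d+)/.exec(cacheControl || '');
  if (!m) return false; // no explicit lifetime here: treat as stale
  return ageSeconds < Number(m[1]);
}
```

With max-age=120, a copy that is 60 seconds old is served from the edge; at 180 seconds the request goes back to the origin.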

I can then test my headers using wget, with the options -S (show headers) and --header (set the User-Agent), e.g.

wget -S http://www.acme.com --header="User-Agent: Amazon CloudFront"

I highly recommend starting out with very limited caching of your pages. You don’t want to deploy this, feel like a hero because your site is faster, and then slowly watch the bugs start coming in as you desperately roll back caching. These bugs are particularly hard to track down because no one notices them at first.

4. IP addresses and User Agents will no longer be visible to your site and will not appear in your logs.

All user agent strings will come in as “Amazon CloudFront”. IP addresses will not be your users’ IP addresses. This may require programming changes if your content is location-based.

Log-based web statistics will no longer be accurate. You should expect these statistics to change dramatically anyway, since so much traffic never reaches your server.
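One possible mitigation, assuming your distribution forwards the original client address in an X-Forwarded-For header (worth verifying against Amazon’s current documentation for your configuration), is to read it server-side. A sketch:

```javascript
// Returns the best-guess client IP: the left-most X-Forwarded-For entry when
// present, otherwise the socket address (which behind a CDN is the edge node).
function clientIp(headers, remoteAddr) {
  const xff = headers['x-forwarded-for'];
  if (!xff) return remoteAddr;
  return xff.split(',')[0].trim();
}
```

Note that X-Forwarded-For is client-suppliable on requests that don’t pass through the CDN, so it should inform statistics and geolocation, not security decisions.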

5. If you set up an additional domain, such as origin.acme.com, you’ll need to track sessions across sub-domains.

This is easy in PHP.

# for cloudfront integration and use of origin.acme.com
php_value session.cookie_domain acme.com

6. If you set up an additional domain, you’ll need to make sure Google analytics tracks both sub-domains as one.

Just use:

_gaq.push(['_setDomainName', 'acme.com']);

7. Social Media sharing links may need to be configured differently if a second domain is used.

8. You will need to make DNS changes.

Simply CNAME your domain to the distribution domain provided by Amazon, e.g. www.acme.com CNAME d1dkq6joi5aul.cloudfront.net.

9. Canonical redirects (www.) redirects will not work through CloudFront. * (this no longer applies as of 6/12/2013, see update at top of page)

For SEO purposes, the “base domain” (e.g. “acme.com”) is usually 301-redirected to the www. domain. My approach was to use the www. site for CloudFront, but leave the “acme.com” site at the origin IP. When users access “acme.com”, the origin server redirects them back to www.acme.com.


You will inevitably find yourself troubleshooting to figure out why something did or did not cache. Here are some tips.

1. Age header.

Watch the headers from responses in Firefox Firebug or Chrome Network view. The “Age” header will tell you how old the content is.

2. X-Cache header.

Will tell you whether CloudFront hit or missed the cache.

3. Amazon support provided this tip: traceroute the CloudFront domain first.

traceroute d1dkq56j3333dl.cloudfront.net

This allows you to identify the major geographic location your content will be fetched from. For example, the traceroute shows this request is going to the Frankfurt data center (d1dkq56joi5aul.fra6.cloudfront.net).

Next, use curl with the Host option to see what is returned from that edge location.

curl -I -H "Host: d1dkq56j3333dl.cloudfront.net"

 4. Develop a script you can host in multiple geographical locations, which fetches URLs from edge locations it finds.

I’ve included the version we used – remote_test_cloudfront.php (zip, GNU licensed). This script can be invaluable when testing a site as seen from multiple locations.


Here are a few issues I found with CloudFront.

1. When invalidating content – “/” is not the same as /index.html – even if you specify your “Default Root Object” to be “index.html” with Distribution settings.

2. Documentation is thin and not that clearly written.

3. Configuration options are very basic.

4. No ability to accurately change length of caching via CloudFront interface – requires technical changes to headers in site.

Yet, despite the “peeves”, this is a really useful service.

Best of luck with your CloudFront migration. As always let us know if you’d like us to assist.

Posted in Cloud Deployment and Support

How Web Developers can Build Apps in HTML5 (and natively on Android)

T-on-Time mobile on Android


In June, Neptune Web launched a new version of T-on-Time, the web app which takes advantage of MassDOT’s new real-time data feed for the commuter rail. I’m proud to say ours was the first app to do this, since we released it within a matter of days after the feed was announced.*

*Contrary to the “apps have already been released” claim in the press releases, it did not take 2 days to develop this app. We participated in an open MassDOT trial feed. The actual app and its constituent parts took months to develop and test.


In a previous post, I talked about the Desktop version of T-on-Time, developed in Adobe AIR – a platform which I still feel has great promise. T-on-Time Desktop was a contest entry, done mostly on “personal time”, and was more of an amateur (as in “not full time”, not as in “rookie”) effort.

The new T-on-Time “Suite” was a corporate effort, and was much more complete in its approach.

In this version, the addition was the mobile version. However, to retain the value of the original Adobe AIR Desktop version, we combined Desktop with mobile to create a “suite” of commuter tools. The entire “Suite” now consists of:

For web developers, it should be noted that each of these pieces can take a lot of time and can affect the cost of your project. For example, one would think that by developing the Android app code, you automatically get an Android store presence. However, you still need to market your app within the store: developing the images and copy and configuring your account in the Google store are fairly time-consuming tasks. Developing that 3-4 page “app” marketing web site can also be misleadingly time-consuming. Although “app sites” are more or less templatized, coming up with something everyone agrees on is another story entirely. If you take this approach, be sure to add that time into your estimates.

That covers the background. Why did we decide to cover HTML5/web first, followed by Android native? Why not iPhone or Blackberry?

As I mentioned in my previous posts, I have focused first on technology web developers are familiar with. Learning Objective-C and Xcode leaves web developers (who are familiar with JavaScript/HTML/CSS and server-side languages such as PHP or ASP.NET) way overextended. This is a big problem with the iPhone platform for me, and Blackberry poses the same problem. Although Android has a similar environment, I find it much more open than Xcode/iPhone: it uses the Java language, and it has a faster growth projection.

HTML/JavaScript is really the “first” platform for web developers. I hope someday the Android market and iPhone store become mere marketing vehicles for people to find your apps – no longer the only way to deliver your app. I’m sure Google and Apple don’t feel the same way.

Anyway, here’s how the HTML5/Android app works.

  1. First, develop for HTML5/web only, working in the browser. You can use all the familiar tools: Firebug or the Chrome debugger. No need to compile or run a (very slow) emulator – just reload your page to see your changes, using local HTML/CSS and JavaScript. (See the note below about disabling cross-domain restrictions for local code.)
  2. Build an Android “shell” to hold the HTML5 app, using WebView classes.
  3. Convert your data storage to use native Android. Use the JavaScript integration to make functions available to JavaScript which call native Android features.
  4. Load your local content, containing JavaScript and even jQuery, from the /assets/ folder, like this: webview.loadUrl("file:///android_asset/tontime.html");
  5. Possibly build an iPhone shell
  6. Share code with web version and “assets” folder of Android app

A sample Android resource, src and Manifest.xml (Zip Format) to get you started with your Android version is included here.

A few things worth mentioning:

  1. Never rely on the Internet unless you really need it. For example, don’t use cookies just because they are familiar to you. If you just need a place to store data, use local storage instead.
  2. When testing your app in the browser, you can avoid XSS/Ajax cross-domain limitations by launching Chrome like this:


C:\Users\me\AppData\Local\Google\Chrome\Application\chrome.exe --allow-file-access-from-files --disable-web-security
                Be sure to run as Administrator on Windows.
  3. Repeated downloads of XML content from the Internet can make your Android cache HUGE, and I found no way to reduce the size of the Android cache (there were examples for Android 2.2 – it’s just that none worked). I chose to clear the cache myself, by calling a JavaScript interface function periodically, e.g.
                public void clearcache() {
                                // trick to keep the cache from getting huge, since turning it off doesn't work
                                WebView wv = ((tontime) mContext).getWebView();
                                wv.clearCache(true);
                }
  4. When using the JavaScript Interface, be sure to check the values returned from Android functions. Type mismatches between JavaScript and native functions cause very difficult-to-troubleshoot bugs.
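That return-value check can be wrapped up once, on the JavaScript side, in a defensive helper. The bridge object and method names below are illustrative – the real interface name is whatever you pass to addJavascriptInterface:

```javascript
// Calls a method on the native bridge defensively: if the bridge is missing
// (e.g. the page is running in a plain browser) or the returned type doesn't
// match the fallback's type, the fallback is used instead.
function safeNative(bridge, method, fallback) {
  if (!bridge || typeof bridge[method] !== 'function') return fallback;
  let value;
  try {
    value = bridge[method]();
  } catch (e) {
    return fallback; // native call failed; keep the web app running
  }
  return typeof value === typeof fallback ? value : fallback;
}
```

A side benefit is that the same HTML5 code keeps working in the browser during step 1 of the workflow, where no native bridge exists at all.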

This type of development can also be achieved using frameworks. However, don’t forget that frameworks carry a large hidden cost: a) a steep learning curve for you (and for the others who will maintain your code after you), and b) reduced ability to troubleshoot deep technical problems, because the framework layer hides things from you.

Well gotta run (to catch a train). Best of luck with your Android/HTML5 apps!

Posted in Android App Development, Hybrid Apps, IOS App Development

Improving Magento Checkout Performance with Large Number of Cart Rules


If you’ve had problems with performance in your Enterprise 1.8 Magento cart and checkout process, it could be due to a large number of sales cart rules.

We’ve found that more than 25 rules will make performance completely unacceptable. The problem occurs when a large number of sales cart rules is combined with a large number of items in the cart, and it is exacerbated when products have many attributes.

Try removing all of your cart rules to see if this is the source of your problem.

Each sales cart rule makes a call to the product load() function, which (in our dedicated environment) costs about 0.1 (1/10th of a) second. This function is very slow because it has to compose the Entity-Attribute-Value (EAV) records from many tables (see the details on EAV on Magento’s site).

Multiply the number of products in the cart by the number of rules (at roughly 0.1 seconds per load) to estimate how slow your cart or checkout page will be.
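That back-of-the-envelope estimate can be written down as a tiny helper (a sketch; the 0.1 s per-load figure is the one measured in our environment above, and yours will differ):

```java
// Rough checkout-latency estimate: every cart rule triggers a product
// load() for every item in the cart, and each EAV load costs ~0.1 s.
public class CartRuleCost {
    static final double SECONDS_PER_LOAD = 0.1; // measured, environment-specific

    public static double estimateSeconds(int itemsInCart, int cartRules) {
        return itemsInCart * cartRules * SECONDS_PER_LOAD;
    }
}
```

So a cart with 10 items under 25 rules sits around 25 seconds per page load, which matches the "completely unacceptable" experience described above.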

To fix this, just override the product->load() function. Create a simple global to store previously loaded products. (Obviously, this “cache” is valid for 1 request only).

Now, calls to load() are based on the number of items in cart only. This can make a huge difference in performance – and in your users’ shopping cart experience.
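For readers outside PHP, the same request-scoped memoization pattern in miniature (a Java sketch of the idea only; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Request-scoped memoization: the first load for a given id pays the full
// (EAV) cost, and every later load within the same request returns the
// cached copy.
public class RequestCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader; // the expensive load, e.g. parent::load
    private int missCount = 0;           // how many real loads we performed

    public RequestCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V load(K id) {
        if (!cache.containsKey(id)) {
            missCount++; // cache miss: do the expensive load once
            cache.put(id, loader.apply(id));
        }
        return cache.get(id);
    }

    public int missCount() {
        return missCount;
    }
}
```

The key design point is scope: the cache lives and dies with the request, so there is no invalidation problem to solve.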

Here is a code sample of how your overridden Product class should look:

	public function load($id, $field = null) {
		// request-scoped cache: a static here behaves like the "simple global" described above
		static $ECV_GLOBAL_CACHE = array();
		if (isset($ECV_GLOBAL_CACHE[$id])) {
			// since $id was already loaded, return the cached version
			return $ECV_GLOBAL_CACHE[$id];
		}
		// $id was NOT loaded yet: call the EAV tables once and cache the result
		$ECV_GLOBAL_CACHE[$id] = parent::load($id, $field);
		return $ECV_GLOBAL_CACHE[$id];
	}
I hope this helps you out. As always, let us know if there is anything we can do to help support your Magento installation!


Posted in E-commerce, Legacy, Magento, Uncategorized

Identifying Magento Performance Problems with the Magento Profiler


The Magento Profiler is used to identify performance problems on the server side.  The Profiler can help you find PHP functions which use up too much CPU or functions with slow database queries.

These problems will first be noticed when you have high load on your server. Apache processes can be monitored using “top”, where you will see apache or httpd processes jumping to the top of the list, consuming a large percentage of CPU.

Using the Profiler requires a fairly deep (i.e. time-consuming) analysis, so make sure you are barking up the right tree before proceeding. You’ll want to eliminate any front-end issues (such as loading large PNG files, too many CSS or JavaScript files, missing content compression, unnecessary JavaScript, etc.) to be sure your problem is really server side. (The “YSlow” Firebug plugin is a good resource for client-side problems.)

Magento is very resource intensive, and many shared hosts will not be able to run it with decent performance.

Make sure your problems are not related to your database. Log in to MySQL and run “SHOW PROCESSLIST” as you browse through the slow areas of the site. If any queries stay on the screen as you watch, you probably have a database performance problem.

Finally, make sure your problem is not a network-related issue, such as a slow or faulty Internet connection, or a firewall.

Generally, look at the first page hit using Firebug’s Net view to see the total server-side time required to generate the page. The Magento Profiler is limited to this first page hit – so make sure you know how much performance you can actually gain. Focus on the entries that account for the greatest percentage of the overall page time, to be sure you are getting the low-hanging fruit first.

Be sure you know what’s going on with your cache. If you are using caching, the difference between the first and subsequent hits can be huge, and will throw confusion into the mix, giving you meaningless results. I recommend adding some markers using the PHP error_log() function (and tail -f the web error log), so you know when the Full Page Cache is used. See my previous blog post on the Full Page Cache.

As with many Magento problems, I’ve given up trying to find documentation or explanations online.  Although the architecture is technically beautiful and the code very well written, documentation can be very spotty. You can occasionally strike gold on the community site, but I’ve found the most direct way to approach many Magento problems is to read the source. Once you go through the code, you find out that the feature wasn’t as complicated as you thought it was. It’s this way with the Magento Profiler.

However, in this post, I’ll try to save you some pain reading the source, by sharing some experiences on how I’ve used the Profiler.

I’m assuming you’re running Magento Enterprise 1.8.

First, enable the profiler via System -> Configuration -> Developer -> Debug -> Profiler (Yes). This enables the profiler, but does not by itself fill in any of the benchmark times.

Next, uncomment the Varien_Profiler::enable() call in /index.php; it sits just below this comment:

# toggle this to enable profiler.

Next, refresh the page you are optimizing. At the bottom of the page, you will see the performance table.


Fig 1: Magento profile data at bottom of page.

This table is impossible to read directly inline, since its HTML is placed outside of any html or body tags. Go into the source and copy the entire page contents. Paste into Notepad or any other text editor. Eliminate the regular page HTML, leaving only the HTML which builds up the performance data table. Save to a temporary HTML file and open it in Internet Explorer to view the static page. (IE lets you copy the table from HTML into Excel.)


Fig. 2: Magento stats loaded into Excel. (Click to enlarge.)

The call to Mage::app should be at the top of the list. This is the full time your request takes to run on the server (minus process startup). It’s what you want to reduce as much as possible. The code in app/Mage.php is what “marks” the start and end points to profile.

... core magento code ...

I ignore the memory usage stats. (If you’ve figured out how to make these egregiously large numbers meaningful, leave me a comment.) Under normal conditions, Magento chews up around 50 MB of memory per process. If you are running data-loading scripts, it can use much more memory if there are repeated instantiations of Magento objects (users, products, etc.).

The number of instantiations is very meaningful, as it will tell you if unnecessary objects are being created, possibly through customizations you’ve made. But don’t assume the Magento code is perfect either: we found that any more than 10 shopping cart rules will slow the performance of the cart to a crawl, due to repeated EAV load calls. (NOTE: EAV load calls are very expensive, performance-wise. Each one costs about 1/10 of a second. I’ll post another blog article on that solution if it will help someone – let me know.)

The “Time” column indicates the total time spent between the “start” and “stop” calls across “Cnt” invocations. To resolve your performance problem, look for large instantiation counts resulting in large “Time” values within this report.

Use recursive grep on the source to find out what is being measured within the profile report. E.g.

grep -r "Varien_Profiler::start('mage'" *

Also, you should be able to add your own Varien_Profiler::start() and stop() calls within the code (though I haven’t done that myself).
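If it helps to see what that start/stop bookkeeping amounts to, here is the pattern in miniature (a Java sketch of the idea only, not Magento's PHP implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Miniature start/stop profiler: each named section accumulates total
// elapsed time and a hit count, analogous to the "Time" and "Cnt"
// columns in the Varien_Profiler report.
public class MiniProfiler {
    private static final Map<String, Long> startedAt = new HashMap<>();
    private static final Map<String, Long> totalNanos = new HashMap<>();
    private static final Map<String, Integer> counts = new HashMap<>();

    public static void start(String name) {
        startedAt.put(name, System.nanoTime());
    }

    public static void stop(String name) {
        long elapsed = System.nanoTime() - startedAt.get(name);
        totalNanos.merge(name, elapsed, Long::sum); // accumulate "Time"
        counts.merge(name, 1, Integer::sum);        // accumulate "Cnt"
    }

    public static int count(String name) {
        return counts.getOrDefault(name, 0);
    }

    public static double totalSeconds(String name) {
        return totalNanos.getOrDefault(name, 0L) / 1e9;
    }
}
```

Reading the report then becomes exactly the analysis described above: sort by accumulated time, and look for sections whose count is suspiciously high.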

Good luck and I hope this article helps with troubleshooting your Magento performance problem. Leave me a comment if you have more information or need some help.

Posted in Content Management, E-commerce, Legacy, Magento