Device independence promises more than it delivers
Have you heard the sales pitch that the solution you are about to buy is device independent? Maintain a single code base, the pitch goes, and it will deploy seamlessly to whatever variety of devices it encounters in your particular case.
The theory behind this setup is very attractive, and I understand why it's a common approach for solution providers to take, especially at design time and when forecasting the solution's running costs. However, this is one of those areas where the devil is in the details.
I would list three different approaches under this topic; the pros and cons of each are covered in the graphic. By proprietary independence I mean products that add a proprietary layer of independence, often in the form of a third-party product with which the solution provider partners.
HTML5 offers the highest level of device independence: it's a widely used standard, the skillset is readily available on the market, and it's close to universally supported. If performance and device-specific features are not important, while speed to market, cost and longevity are, this is the solution to go for.
Hybrid solutions – such as PhoneGap and Titanium – are a compromise between HTML5 and native, attempting to extract the best of both worlds. Now, this is important: when you select one of these solutions, make sure you are truly getting the best of the worlds you are interested in, because there are downsides. To name a few: the added independence layer decoupling the device platform from the application adds complexity, both technically and from a support perspective, as you typically start relying on yet another external company. That means multiparty support has to be engaged, which can be very tricky to make happen on decent timescales. The layer will also slow down overall performance, so make sure this is acceptable, especially if you deal with large data volumes or complex data transformations on the device.
Native development may seem to many like the worst option, and in some cases it may be. However, there are also situations – such as the examples just mentioned – where native really shows its strengths. Native also does not need to mean that the logical architecture of the application is different: the logic can be kept identical across platforms, enabling a tightly knit development team to maintain essentially the same lines of code on each one. The message I'm trying to get across is that native should not be discarded without a proper study of exactly what you expect from the application. In more cases than you might think, it is a very attractive option, even for large enterprise apps.
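To make the "identical logic across platforms" point concrete, here is a minimal sketch of the idea in TypeScript. All names (`Storage`, `SessionManager`, `InMemoryStorage`) are hypothetical illustrations, not from any specific framework: the business logic lives in one shared core, and each native platform supplies only a thin adapter behind an interface.

```typescript
// Hypothetical example: a platform-independent application core.
// Each native platform (iOS, Android, ...) implements only this interface.
interface Storage {
  save(key: string, value: string): void;
  load(key: string): string | undefined;
}

// Core logic: written once, kept identical on every platform.
class SessionManager {
  constructor(private storage: Storage) {}

  logIn(user: string): void {
    this.storage.save("currentUser", user);
  }

  currentUser(): string | undefined {
    return this.storage.load("currentUser");
  }
}

// The adapter is the only per-platform piece; this in-memory version
// stands in for, say, the iOS Keychain or Android SharedPreferences.
class InMemoryStorage implements Storage {
  private data = new Map<string, string>();
  save(key: string, value: string): void { this.data.set(key, value); }
  load(key: string): string | undefined { return this.data.get(key); }
}

const session = new SessionManager(new InMemoryStorage());
session.logIn("alice");
console.log(session.currentUser()); // "alice"
```

The design choice here is plain ports-and-adapters: the team rewrites only the small adapter per platform, while the shared core stays line-for-line the same, which is what keeps multi-platform native development tractable.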
What is your own experience with the above? Feel free to share comments and examples of where you've had to make this choice, what you went for, and any learnings you believe the community should be aware of.