For the last six years I have asked myself this recurring question, and I think the time has finally come where I can answer it from a personal point of view.
Most of us have tried it. We have made a software design that uses a piece of technology that has never been used in the company before. Everybody in the design group agrees on the design, just to get it shot down by the architect or the management, due to the use of one or more new technologies.
Just to clarify: when I say “new technology”, it is not new as in time since creation or release, it is new in the sense of “new to the company”. So what someone has been using for a decade can still be new to others.
You then go back and re-design the solution, either by finding a suitable solution completely within the technological limits of your architecture, or by “hacking” your design to work on the given technologies. It is the latter I am referring to when I say it will slowly kill your application.
My thesis is that by limiting developers in their use of technologies, we risk killing an innovative and simple solution to a problem, ending up with a sub-optimal, complex and hard-to-maintain solution instead.
In my experience, there are two common cases where the technological limitation becomes a real problem. The first is where the company stands firm on its choice of technologies on its platform. The second is where the developers are allowed to use the requested technology, but either don’t get the time required to learn it, or simply use it wrong. I will try to give an example of each below, using a storage engine limitation as the example.
Let’s say we are working in a company where the storage technology used is a relational database, and we are to develop a new software module. We know that the module will receive data payloads and needs to persist them, so they can later be searched and fetched for further processing.
The data payloads received are isolated, have multiple levels of nesting, and come in a semi-structured format. Already here we can see that a relational database is not suitable for this module. Instead you could argue for a document database, which would allow the application to persist, and work with, the payload directly in the structure it was received in.
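To make the scenario concrete, here is a small sketch of the kind of payload I mean. The field names and values are purely hypothetical, invented for illustration; the point is the nesting and the fact that two payloads need not share the same fields:

```python
import json

# Two hypothetical payloads: isolated, nested, and semi-structured
# (the set of fields varies from payload to payload).
payload_a = {
    "id": "order-1001",
    "customer": {"name": "Jane", "country": "DK"},
    "lines": [
        {"sku": "A1", "qty": 2, "options": {"gift_wrap": True}},
        {"sku": "B7", "qty": 1},
    ],
}

payload_b = {
    "id": "order-1002",
    "customer": {"name": "Ole"},         # no country this time
    "lines": [{"sku": "C3", "qty": 5}],
    "notes": "call before delivery",     # extra field payload_a lacks
}

# A document database stores each payload as-is, one document per payload...
document = json.dumps(payload_a)

# ...and hands it back in the same nested structure it arrived in.
assert json.loads(document) == payload_a
```

No schema has to anticipate `options` or `notes`; each document simply carries whatever fields it came with.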
Example 1: Make it work
In this first example we assume the fear, or simply the business choice, not to introduce new technologies is enforced, thereby dictating that the module must use the existing storage technology.
As I stated earlier, we are not looking at the scenario where the developers make a completely new design that takes the dictated technology into account. We instead assume the design is just modified to match the dictated technology.
Most of the times I have seen this exact problem, the result ends up being a small data transformation layer in the ORM. This transformation layer has the responsibility of converting the nested payload to a relational structure, and back again when required.
Not only does this solution add additional code to understand and maintain, it also adds complexity and a sub-optimal database structure to accommodate the semi-structured format.
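A minimal sketch of what such a transformation layer tends to look like, using hypothetical table and function names (a real one would sit inside the ORM mapping, but the shape is the same): the nested payload is split into flat rows for an `orders` table and an `order_lines` table, and re-assembled on the way back out.

```python
def to_rows(payload):
    """Flatten one nested payload into an `orders` row and `order_lines` rows."""
    order_row = {
        "id": payload["id"],
        "customer_name": payload["customer"]["name"],
    }
    line_rows = [
        {"order_id": payload["id"], "line_no": i,
         "sku": line["sku"], "qty": line["qty"]}
        for i, line in enumerate(payload["lines"])
    ]
    return order_row, line_rows


def from_rows(order_row, line_rows):
    """Re-assemble the nested payload from its relational rows."""
    return {
        "id": order_row["id"],
        "customer": {"name": order_row["customer_name"]},
        "lines": [
            {"sku": r["sku"], "qty": r["qty"]}
            for r in sorted(line_rows, key=lambda r: r["line_no"])
        ],
    }


payload = {
    "id": "order-1001",
    "customer": {"name": "Jane"},
    "lines": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
}

order_row, line_rows = to_rows(payload)
assert from_rows(order_row, line_rows) == payload
```

Notice that the layer only round-trips the fields its tables were designed for; any field the schema did not anticipate is silently dropped, which is exactly the mismatch with a semi-structured format. Every new field means a schema change plus changes in both conversion directions.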
Example 2: Have it your way!
In this second example the company chooses to let the developers use the new technology, thereby keeping the original design.
You would think this is the perfect scenario and everything ends well. Unfortunately, that is not always the case. I have seen time and time again that the victory of getting to use the new technology comes at the cost of a compromise on development time, because saving time was the whole reason for using it, right?
When this happens, it is often the time set aside for learning the new technology that is cut, forcing the developers to rely on their current knowledge and experience.
So the developers start implementing their design on the new document database, but none of them have actually worked with such a storage engine before. They end up using their shiny new storage engine as a relational database, because that is what they have experience with.
Not only is this solution more complex and harder to maintain, it also adds a new technology to the platform while delivering absolutely none of the benefits we wanted from that technology.
When I first stood with this problem, I had a discussion with the director of the company. His argument was that by adding a new technology to the architecture, we would need to either train existing personnel or hire new people competent in the new technology. And not only developers; we would also need people with the experience to install, support and maintain the installations.
Back then I thought that was a pretty good argument, as most things were still hosted and maintained by the companies themselves. Today, on the other hand, most technologies can be hosted in a containerised environment based on official production-ready images, or bought as a cloud service. That makes it less of an issue today, especially compared to the long-term consequences of working around the technology limitations.
To sum it all up: are limitations on technologies, whether storage, queues, programming languages, etc., really killing your application? In my opinion they might, due to the combination of increased complexity and additional code to understand and maintain. It will not have consequences at the time of development, but someday all these workarounds and all this additional code might just be the thing that prevents you from hitting a deadline or adding that cool new feature.
So is the right thing to always allow the developers to use the technologies they want? The simple answer is no: the technology needs to make a noticeable difference to the implementation, and it requires a proper understanding of the technology in question. Your developers don’t need to be experts to use a new technology, but they still need a basic understanding of it.
Another thing to remember: as you get new technologies into your platform, other applications can use them, giving you a recurring benefit for the same initial cost. It is another tool in the developers’ toolbox when they are designing an application, and it can save them from frustrations in the future.
My personal opinion is that if it makes a difference, and it is used correctly, then the initial investment in learning, plus the ongoing cost of maintenance, is much smaller than the long-term cost of not using it.