Memory Allocations and Automation: Generalism versus Optimization


Interviews bring with them interesting technical questions and different world views. Here’s one about memory management.

When I started looking at places to go for my new gig, I went to a few interviews at startup companies. It was an interesting experience – after years as a developer and a marketer, I had to sit in front of other developers and sell myself.

Now you need to understand, I come from a background of developing SDKs for other developers. In my line of business, you have no clue who will be using your code or for what. It can be an embedded client with puny memory and CPU, or it can be a huge server farm running a five-nines telephony service. This usually meant doing everything manually and preparing for the worst.

In one of my interviews, I was asked to “develop” a system that saves and loads graphical elements in an image: things like rectangles, circles, text areas, etc.

So I did. And with my usual thinking, I decided to make my structures small. Very. And then to deal with memory allocation on my own, or rather in a way that made it tight – as few malloc() or new calls as possible. This didn’t work out well with the guys in the room…

The debate turned religious, and from then on things only deteriorated.

My own position was that you can do things faster if you do them manually, since you are the king of your castle: you know your application and its behavior (hopefully), so you can design memory allocation to fit your needs. The startup, on the other hand, relied on Linux and Intel to do dynamic memory allocation in the best way for them – don’t fix things that ain’t broken.
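To make the manual approach concrete, here is a minimal sketch of what I mean by keeping allocation tight – my own illustration, not code from the interview. A fixed-size object pool pays for one block of storage up front, and after that, acquiring and releasing objects never touches malloc() or new again. All names and sizes here are assumptions for the example.

```cpp
#include <cstddef>
#include <new>

// Fixed-size object pool: one block of storage, a free list threaded
// through the unused slots, constant-time acquire/release, and no
// heap calls after construction. Illustrative sketch only.
template <typename T, std::size_t N>
class Pool {
public:
    Pool() {
        // Thread all slots onto the free list once, at startup.
        for (std::size_t i = 0; i + 1 < N; ++i)
            slots_[i].next = &slots_[i + 1];
        slots_[N - 1].next = nullptr;
        free_ = &slots_[0];
    }

    T* acquire() {
        if (!free_) return nullptr;       // pool exhausted
        Slot* s = free_;
        free_ = s->next;
        return new (&s->storage) T();     // placement-new: no heap call
    }

    void release(T* p) {
        p->~T();
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = free_;                  // slot goes back on the free list
        free_ = s;
    }

private:
    union Slot {
        Slot* next;                       // valid only while the slot is free
        alignas(T) unsigned char storage[sizeof(T)];
    };
    Slot slots_[N];
    Slot* free_ = nullptr;
};
```

This is exactly the trade the interviewers disliked: you gain predictable, constant-time allocation at the cost of fixing a capacity and writing (and debugging) the plumbing yourself.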

As with any good technical debate, this one left me with an uneasy feeling, and the best way to quell that is by consulting the Google oracle – or in this case, Stack Overflow. There’s a similar question there with answers that go both ways. No help there…

I guess it really is a matter of priorities. A university professor whom I highly value once told me that you need to first build the product and only then start optimizing. I, on the other hand, believed in architecting the product with optimizations in mind from day one. When your application gets big enough that you need to scale it, you will need to optimize memory allocations: either do them by hand instead of dynamically, or go find yourself a better memory allocation mechanism (or garbage collection mechanism, depending on the language) than the one provided by default by the operating system.

The things to ask yourself are:

  • Do you think you will need to optimize? How dynamic is your program? How much throughput do you expect it to handle?
  • Can you leave optimization to a later stage? Can you at least take it into consideration in your initial design to make it easier to do in the future?
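One way to take it into consideration in the initial design – sketched here under my own naming, nothing in it is from the post – is to route every allocation through a single seam. The day-one implementation simply defers to the system heap, and a pool or custom allocator can be dropped in later without touching call sites.

```cpp
#include <cstddef>
#include <cstdlib>

// A single allocation seam. Call sites depend only on this interface,
// so the mechanism behind it can change without a rewrite.
struct Allocator {
    virtual void* alloc(std::size_t n) = 0;
    virtual void  free(void* p) = 0;
    virtual ~Allocator() = default;
};

// Day-one implementation: just trust the system allocator.
struct HeapAllocator : Allocator {
    void* alloc(std::size_t n) override { return std::malloc(n); }
    void  free(void* p) override { std::free(p); }
};

// A graphical element like those in the interview exercise
// (the fields are my guesses, not the original spec).
struct Rect { int x, y, w, h; };

Rect* make_rect(Allocator& a, int x, int y, int w, int h) {
    Rect* r = static_cast<Rect*>(a.alloc(sizeof(Rect)));
    if (r) *r = Rect{x, y, w, h};
    return r;
}
```

Later, a pool-backed Allocator can replace HeapAllocator; only the wiring at startup changes, which is precisely the “easier to do in the future” the second question asks about.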


Ori Albin says:
February 20, 2012

This seems to be a classic Pareto principle problem (the 80/20 rule). Are you willing to design your product around the 20% that may encounter problems in the future, or the 80% that will supply you with a working system?

Although the answer isn’t clear, as the last 20% usually consumes 80% of your time.

I usually rely on working components and reuse as much as possible rather than re-invent the wheel. Inventing things that work, even as an improvement, won’t be the best solution every time. Mostly you will encounter the same problems that the previous team ran into, and you’ll be wasting your time fixing bugs that the ready-made system has already eliminated.

You should, of course, design your code to be flexible enough that the memory allocation mechanism can easily be changed if the problems you fear do pop up.

In my opinion, the key to this issue is thinking it through during the design phase. Optimization for these problems must be considered at the design-documentation level; otherwise it will be a big mess to fix if needed.